[2024-09-13 13:02:15.450163] INFO [SERVER] main (main.cpp:564) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=0] succ to init logger(default file="log/observer.log", rs file="log/rootservice.log", election file="log/election.log", trace file="log/trace.log", audit_file="audit/observer_19876_202409131302152096805136.aud", alert file="log/alert/alert.log", max_log_file_size=268435456, enable_async_log=true)
[2024-09-13 13:02:15.450238] INFO [SERVER] main (main.cpp:568) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=72] Virtual memory : 721,555,456 byte
[2024-09-13 13:02:15.450246] INFO [SERVER] main (main.cpp:571) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] Build basic information for each syslog file(info="address: , observer version: OceanBase_CE 4.2.4.0, revision: 100000082024070810-556a8f594436d32a23ee92289717913cf503184b, sysname: Linux, os release: 3.10.0-957.1.3.el7.x86_64, machine: x86_64, tz GMT offset: 08:00")
/u01/app/observer/bin/observer -r 172.16.51.35:2882:2881;172.16.51.36:2882:2881;172.16.51.37:2882:2881 -p 2881 -P 2882 -z zone1 -n ob-poc -c 1726203323 -d /data1/oceanbase/data -I 172.16.51.35 -o __min_full_resource_pool_memory=2147483648,memory_limit=16G,system_memory=8G,datafile_size=20G,log_disk_size=20G,enable_syslog_wf=True,enable_syslog_recycle=True,max_syslog_file_count=4
observer (OceanBase_CE 4.2.4.0)
REVISION: 100000082024070810-556a8f594436d32a23ee92289717913cf503184b
BUILD_BRANCH: HEAD
BUILD_TIME: Jul 8 2024 11:07:07
BUILD_FLAGS: RelWithDebInfo
BUILD_INFO:
Copyright (c) 2011-present OceanBase Inc.
[2024-09-13 13:02:15.450290] INFO print_all_limits (main.cpp:368) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] ============= *begin server limit report * =============
[2024-09-13 13:02:15.450297] INFO print_limit (main.cpp:356) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] [operator()] RLIMIT_CORE = unlimited
[2024-09-13 13:02:15.450303] INFO print_limit (main.cpp:356) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] [operator()] RLIMIT_CPU = unlimited
[2024-09-13 13:02:15.450307] INFO print_limit (main.cpp:356) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] [operator()] RLIMIT_DATA = unlimited
[2024-09-13 13:02:15.450311] INFO print_limit (main.cpp:356) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] [operator()] RLIMIT_FSIZE = unlimited
[2024-09-13 13:02:15.450314] INFO print_limit (main.cpp:356) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] [operator()] RLIMIT_LOCKS = unlimited
[2024-09-13 13:02:15.450318] INFO print_limit (main.cpp:358) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] [operator()] RLIMIT_MEMLOCK = 65536
[2024-09-13 13:02:15.450323] INFO print_limit (main.cpp:358) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] [operator()] RLIMIT_NOFILE = 655350
[2024-09-13 13:02:15.450327] INFO print_limit (main.cpp:358) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] [operator()] RLIMIT_NPROC = 655360
[2024-09-13 13:02:15.450331] INFO print_limit (main.cpp:356) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] [operator()] RLIMIT_STACK = unlimited
[2024-09-13 13:02:15.450334] INFO print_all_limits (main.cpp:378) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] ============= *stop server limit report* ===============
name= (11 segments) header 0: address=0x55a370f08040 header 1: address=0x55a370f082a8 header 2: address=0x55a370f08000 header 3: address=0x55a385dcaa00 header 4: address=0x55a3862bc440 header 5: address=0x55a385dcaa00 header 6: address=0x55a3862b0e88 header 7: address=0x55a385dcaa00 header 8: address=0x55a398cb6864 header 9: address=0x55a395b08000 header 10: address=0x55a370f082c4
name= (4 segments) header 0: address=0x7fff6b387000 header 1: address=0x7fff6b387438 header 2: address=0x7fff6b387300 header 3: address=0x7fff6b38733c
name=/lib64/librt.so.1 (7 segments) header 0: address=0x2b079520c000 header 1: address=0x2b0795412d40 header 2: address=0x2b0795412d70 header 3: address=0x2b079520c1c8 header 4: address=0x2b07952112c4 header 5: address=0x2b079520c000 header 6: address=0x2b0795412d40
name=/lib64/libdl.so.2 (7 segments) header 0: address=0x2b0795414000 header 1: address=0x2b0795616d58 header 2: address=0x2b0795616d88 header 3: address=0x2b07954141c8 header 4: address=0x2b0795415940 header 5: address=0x2b0795414000 header 6: address=0x2b0795616d58
name=/lib64/libm.so.6 (7 segments) header 0: address=0x2b0795618000 header 1: address=0x2b0795918d70 header 2: address=0x2b0795918d90 header 3: address=0x2b07956181c8 header 4: address=0x2b0795710648 header 5: address=0x2b0795618000 header 6: address=0x2b0795918d70
name=/u01/app/observer/lib/libmariadb.so.3 (7 segments) header 0: address=0x2b079591a000 header 1: address=0x2b0795b75268 header 2: address=0x2b0795b7bd98 header 3: address=0x2b079591a1c8 header 4: address=0x2b0795969e60 header 5: address=0x2b079591a000 header 6: address=0x2b0795b75268
name=/u01/app/observer/lib/libaio.so.1 (7 segments) header 0: address=0x2b0795b80000 header 1: address=0x2b0795d80ed0 header 2: address=0x2b0795d80ed0 header 3: address=0x2b0795b801c8 header 4: address=0x2b0795b80a90 header 5: address=0x2b0795b80000 header 6: address=0x2b0795d80ed0
name=/lib64/libpthread.so.0 (9 segments) header 0: address=0x2b0795d82040 header 1: address=0x2b0795d93250 header 2: address=0x2b0795d82000 header 3: address=0x2b0795f98b60 header 4: address=0x2b0795f98d50 header 5: address=0x2b0795d82238 header 6: address=0x2b0795d9326c header 7: address=0x2b0795d82000 header 8: address=0x2b0795f98b60
name=/lib64/libc.so.6 (10 segments) header 0: address=0x2b0795f9e040 header 1: address=0x2b079612ba90 header 2: address=0x2b0795f9e000 header 3: address=0x2b0796360720 header 4: address=0x2b0796363b80 header 5: address=0x2b0795f9e270 header 6: address=0x2b0796360720 header 7: address=0x2b079612baac header 8: address=0x2b0795f9e000 header 9: address=0x2b0796360720
name=/lib64/ld-linux-x86-64.so.2 (7 segments) header 0: address=0x2b0794fe8000 header 1: address=0x2b0795209b40 header 2: address=0x2b0795209e00 header 3: address=0x2b0794fe81c8 header 4: address=0x2b0795006ce4 header 5: address=0x2b0794fe8000 header 6: address=0x2b0795209b40
[2024-09-13 13:02:15.458220] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ObSliceAlloc init finished(bsize_=7936, isize_=40, slice_limit_=7536, tmallocator_=NULL)
[2024-09-13 13:02:15.458291] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=43] ObSliceAlloc init finished(bsize_=7936, isize_=160, slice_limit_=7536, tmallocator_=NULL)
[2024-09-13 13:02:15.458467] INFO [SERVER] main (main.cpp:593) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] observer starts(observer_version="OceanBase_CE 4.2.4.0")
[2024-09-13 13:02:15.458486] INFO [SERVER] init (ob_server.cpp:261) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] [OBSERVER_NOTICE] start to init observer
[2024-09-13 13:02:15.458502] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.458551] INFO [SERVER] init (ob_server.cpp:265) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=38] [server_start 1/18] observer init begin.
[2024-09-13 13:02:15.458639] INFO [SHARE] load_config (ob_config_manager.cpp:129) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] Config file doesn't exist, read from command line(path="etc/observer.config.bin", ret=-4027)
[2024-09-13 13:02:15.458691] INFO [SERVER] parse_mode (ob_server.cpp:231) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set normal mode
[2024-09-13 13:02:15.458728] INFO [SHARE] operator() (ob_common_config.cpp:370) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] Load config succ(name="__min_full_resource_pool_memory", value="2147483648")
[2024-09-13 13:02:15.458758] INFO [SHARE] operator() (ob_common_config.cpp:370) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] Load config succ(name="memory_limit", value="16G")
[2024-09-13 13:02:15.458766] INFO [SHARE] operator() (ob_common_config.cpp:370) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Load config succ(name="system_memory", value="8G")
[2024-09-13 13:02:15.458774] INFO [SHARE] operator() (ob_common_config.cpp:370) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Load config succ(name="datafile_size", value="20G")
[2024-09-13 13:02:15.458780] INFO [SHARE] operator() (ob_common_config.cpp:370) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] Load config succ(name="log_disk_size", value="20G")
[2024-09-13 13:02:15.458787] INFO [SHARE] operator() (ob_common_config.cpp:370) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] Load config succ(name="enable_syslog_wf", value="True")
[2024-09-13 13:02:15.458794] INFO [SHARE] operator() (ob_common_config.cpp:370) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Load config succ(name="enable_syslog_recycle", value="True")
[2024-09-13 13:02:15.458802] INFO [SHARE] operator() (ob_common_config.cpp:370) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Load config succ(name="max_syslog_file_count", value="4")
[2024-09-13 13:02:15.458818] INFO [SERVER] calc_cluster_name_hash (ob_server_reload_config.cpp:73) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] calc cluster_name_hash for rpc(cluster_name=ob-poc, cluster_name_hash=1865476510801839422)
[2024-09-13 13:02:15.458833] INFO [SERVER] set_cluster_name_hash (ob_server_reload_config.cpp:58) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set cluster_name_hash(ret=0, ret="OB_SUCCESS", cluster_name=ob-poc, cluster_name_hash=1865476510801839422)
[2024-09-13 13:02:15.458849] INFO [SERVER] init_config (ob_server.cpp:1933) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set CLUSTER_ID for rpc(cluster_id=1726203323)
[2024-09-13 13:02:15.458857] INFO print (ob_server_config.cpp:158) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] ===================== *begin server config report * =====================
[2024-09-13 13:02:15.458864] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] | _ob_pl_compile_max_concurrency = 4
[2024-09-13 13:02:15.458869] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _enable_dbms_job_package = True
[2024-09-13 13:02:15.458882] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] | _enable_memleak_light_backtrace = True
[2024-09-13 13:02:15.458889] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] | _faststack_min_interval = 30m
[2024-09-13 13:02:15.458893] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _faststack_req_queue_size_threshold = 0
[2024-09-13 13:02:15.458896] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | strict_check_os_params = False
[2024-09-13 13:02:15.458900] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ha_diagnose_history_recycle_interval = 7d
[2024-09-13 13:02:15.458904] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _obkv_feature_mode =
[2024-09-13 13:02:15.458908] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | sql_protocol_min_tls_version = none
[2024-09-13 13:02:15.458911] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_persistent_compiled_routine = True
[2024-09-13 13:02:15.458915] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _ob_ash_enable = True
[2024-09-13 13:02:15.458921] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] | _ob_ash_size = 0
[2024-09-13 13:02:15.458924] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _force_malloc_for_absent_tenant = False
[2024-09-13 13:02:15.458928] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _schema_memory_recycle_interval = 15m
[2024-09-13 13:02:15.458932] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _force_explict_500_malloc = False
[2024-09-13 13:02:15.458935] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_backtrace_function = True
[2024-09-13 13:02:15.458939] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | rpc_server_authentication_method = ALL
[2024-09-13 13:02:15.458943] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | rpc_client_authentication_method = NONE
[2024-09-13 13:02:15.458947] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_reserved_user_dcl_restriction = False
[2024-09-13 13:02:15.458951] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_px_fast_reclaim = True
[2024-09-13 13:02:15.458954] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | observer_id = 0
[2024-09-13 13:02:15.458957] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | local_ip = 172.16.51.35
[2024-09-13 13:02:15.458961] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _load_tde_encrypt_engine = NONE
[2024-09-13 13:02:15.458965] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_protocol_diagnose = True
[2024-09-13 13:02:15.458968] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _display_mysql_version = 5.7.25
[2024-09-13 13:02:15.458972] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | enable_dblink = True
[2024-09-13 13:02:15.458976] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _ob_enable_direct_load = True
[2024-09-13 13:02:15.458980] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ob_plan_cache_auto_flush_interval = 0s
[2024-09-13 13:02:15.458984] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | global_background_cpu_quota = -1
[2024-09-13 13:02:15.458987] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | enable_global_background_resource_isolation = False
[2024-09-13 13:02:15.458991] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | enable_cgroup = True
[2024-09-13 13:02:15.458995] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _advance_checkpoint_timeout = 30m
[2024-09-13 13:02:15.458998] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _endpoint_tenant_mapping =
[2024-09-13 13:02:15.459002] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_tenant_sql_net_thread = True
[2024-09-13 13:02:15.459005] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | sql_net_thread_count = 0
[2024-09-13 13:02:15.459011] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] | sql_login_thread_count = 0
[2024-09-13 13:02:15.459014] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ignore_system_memory_over_limit_error = False
[2024-09-13 13:02:15.459018] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _enable_new_sql_nio = True
[2024-09-13 13:02:15.459021] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _session_context_size = 10000
[2024-09-13 13:02:15.459025] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _enable_newsort = True
[2024-09-13 13:02:15.459029] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ob_obj_dep_maint_task_interval = 1ms
[2024-09-13 13:02:15.459032] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ob_enable_fast_parser = True
[2024-09-13 13:02:15.459036] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _enable_trace_session_leak = False
[2024-09-13 13:02:15.459039] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_block_file_punch_hole = False
[2024-09-13 13:02:15.459043] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _resource_limit_spec = auto
[2024-09-13 13:02:15.459047] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_resource_limit_spec = False
[2024-09-13 13:02:15.459050] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_px_ordered_coord = False
[2024-09-13 13:02:15.459055] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] | _send_bloom_filter_size = 1024
[2024-09-13 13:02:15.459059] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | tcp_keepcnt = 10
[2024-09-13 13:02:15.459063] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | tcp_keepintvl = 6s
[2024-09-13 13:02:15.459066] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | tcp_keepidle = 7200s
[2024-09-13 13:02:15.459070] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | enable_tcp_keepalive = True
[2024-09-13 13:02:15.459074] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | ob_ratelimit_stat_period = 1s
[2024-09-13 13:02:15.459077] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | enable_ob_ratelimit = False
[2024-09-13 13:02:15.459081] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_easy_keepalive = True
[2024-09-13 13:02:15.459085] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _xa_gc_interval = 1h
[2024-09-13 13:02:15.459088] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _xa_gc_timeout = 24h
[2024-09-13 13:02:15.459092] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ob_ssl_invited_nodes = NONE
[2024-09-13 13:02:15.459095] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | ssl_external_kms_info =
[2024-09-13 13:02:15.459099] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | use_large_pages = false
[2024-09-13 13:02:15.459105] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] | _enable_oracle_priv_check = True
[2024-09-13 13:02:15.459108] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | schema_history_recycle_interval = 10m
[2024-09-13 13:02:15.459112] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_plan_cache_mem_diagnosis = False
[2024-09-13 13:02:15.459116] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ob_plan_cache_gc_strategy = REPORT
[2024-09-13 13:02:15.459119] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _upgrade_stage = NONE
[2024-09-13 13:02:15.459123] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ob_enable_prepared_statement = True
[2024-09-13 13:02:15.459127] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _backup_idle_time = 5m
[2024-09-13 13:02:15.459130] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _restore_idle_time = 1m
[2024-09-13 13:02:15.459134] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _bloom_filter_ratio = 35
[2024-09-13 13:02:15.459137] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ob_ddl_timeout = 1000s
[2024-09-13 13:02:15.459141] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | use_ipv6 = False
[2024-09-13 13:02:15.459144] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | ssl_client_authentication = False
[2024-09-13 13:02:15.459148] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _px_max_pipeline_depth = 2
[2024-09-13 13:02:15.459152] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | __easy_memory_reserved_percentage = 0
[2024-09-13 13:02:15.459156] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | stack_size = 512K
[2024-09-13 13:02:15.459159] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | __easy_memory_limit = 4G
[2024-09-13 13:02:15.459163] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _max_elr_dependent_trx_count = 0
[2024-09-13 13:02:15.459166] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | px_task_size = 2M
[2024-09-13 13:02:15.459170] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | dtl_buffer_size = 64K
[2024-09-13 13:02:15.459174] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _cache_wash_interval = 200ms
[2024-09-13 13:02:15.459177] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _migrate_block_verify_level = 1
[2024-09-13 13:02:15.459181] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | micro_block_merge_verify_level = 2
[2024-09-13 13:02:15.459184] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | builtin_db_data_verify_cycle = 20
[2024-09-13 13:02:15.459188] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | tablet_size = 128M
[2024-09-13 13:02:15.459194] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] | sys_bkgd_migration_change_member_list_timeout = 20s
[2024-09-13 13:02:15.459197] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | sys_bkgd_migration_retry_num = 3
[2024-09-13 13:02:15.459201] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _ob_elr_fast_freeze_threshold = 500000
[2024-09-13 13:02:15.459205] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _fast_commit_callback_count = 10000
[2024-09-13 13:02:15.459208] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _private_buffer_size = 16K
[2024-09-13 13:02:15.459212] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _force_skip_encoding_partition_id =
[2024-09-13 13:02:15.459215] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_compaction_diagnose = False
[2024-09-13 13:02:15.459219] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | disk_io_thread_count = 8
[2024-09-13 13:02:15.459223] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | sys_bkgd_net_percentage = 60
[2024-09-13 13:02:15.459226] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | data_disk_usage_limit_percentage = 90
[2024-09-13 13:02:15.459230] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | data_storage_error_tolerance_time = 300s
[2024-09-13 13:02:15.459233] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | data_storage_warning_tolerance_time = 5s
[2024-09-13 13:02:15.459237] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _data_storage_io_timeout = 10s
[2024-09-13 13:02:15.459241] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | storage_meta_cache_priority = 10
[2024-09-13 13:02:15.459244] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | fuse_row_cache_priority = 1
[2024-09-13 13:02:15.459248] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | bf_cache_miss_count_threshold = 100
[2024-09-13 13:02:15.459251] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | bf_cache_priority = 1
[2024-09-13 13:02:15.459255] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | user_row_cache_priority = 1
[2024-09-13 13:02:15.459258] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | user_block_cache_priority = 1
[2024-09-13 13:02:15.459262] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | index_block_cache_priority = 10
[2024-09-13 13:02:15.459266] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | opt_tab_stat_cache_priority = 1
[2024-09-13 13:02:15.459269] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | tablet_ls_cache_priority = 1000
[2024-09-13 13:02:15.459273] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _auto_broadcast_tablet_location_rate_limit = 10000
[2024-09-13 13:02:15.459276] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _auto_refresh_tablet_location_interval = 10m
[2024-09-13 13:02:15.459280] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | location_cache_refresh_sql_timeout = 1s
[2024-09-13 13:02:15.459286] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] | location_cache_refresh_rpc_timeout = 500ms
[2024-09-13 13:02:15.459289] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | all_server_list =
[2024-09-13 13:02:15.459293] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | location_cache_refresh_min_interval = 100ms
[2024-09-13 13:02:15.459297] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | location_fetch_concurrency = 20
[2024-09-13 13:02:15.459301] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | location_refresh_thread_count = 2
[2024-09-13 13:02:15.459304] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | virtual_table_location_cache_expire_time = 8s
[2024-09-13 13:02:15.459308] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _server_standby_fetch_log_bandwidth_limit = 0MB
[2024-09-13 13:02:15.459311] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | standby_fetch_log_bandwidth_limit = 0MB
[2024-09-13 13:02:15.459315] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _max_rpc_packet_size = 2047MB
[2024-09-13 13:02:15.459318] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_pkt_nio = True
[2024-09-13 13:02:15.459322] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | rpc_timeout = 2s
[2024-09-13 13:02:15.459326] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _ob_get_gts_ahead_interval = 0s
[2024-09-13 13:02:15.459330] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _parallel_redo_logging_trigger = 16M
[2024-09-13 13:02:15.459333] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_parallel_redo_logging = True
[2024-09-13 13:02:15.459337] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ob_trans_rpc_timeout = 3s
[2024-09-13 13:02:15.459340] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _rpc_checksum = Force
[2024-09-13 13:02:15.459344] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | ignore_replay_checksum_error = False
[2024-09-13 13:02:15.459347] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | row_compaction_update_limit = 6
[2024-09-13 13:02:15.459351] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | clog_sync_time_warn_threshold = 100ms
[2024-09-13 13:02:15.459355] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | trx_2pc_retry_interval = 100ms
[2024-09-13 13:02:15.459359] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | enable_sys_unit_standalone = False
[2024-09-13 13:02:15.459362] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _lcl_op_interval = 30ms
[2024-09-13 13:02:15.459366] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | server_balance_cpu_mem_tolerance_percent = 5
[2024-09-13 13:02:15.459369] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | server_balance_disk_tolerance_percent = 1
[2024-09-13 13:02:15.459375] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] | server_balance_critical_disk_waterlevel = 80
[2024-09-13 13:02:15.459379] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | __min_full_resource_pool_memory = 2147483648
[2024-09-13 13:02:15.459382] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | __balance_controller =
[2024-09-13 13:02:15.459386] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | balancer_log_interval = 1m
[2024-09-13 13:02:15.459390] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | balancer_task_timeout = 20m
[2024-09-13 13:02:15.459393] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | balancer_tolerance_percentage = 10
[2024-09-13 13:02:15.459397] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | enable_rereplication = True
[2024-09-13 13:02:15.459400] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | resource_hard_limit = 100
[2024-09-13 13:02:15.459404] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | log_storage_warning_tolerance_time = 5s
[2024-09-13 13:02:15.459407] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | log_disk_percentage = 0
[2024-09-13 13:02:15.459411] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | log_disk_size = 20G
[2024-09-13 13:02:15.459414] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | recyclebin_object_expire_time = 0s
[2024-09-13 13:02:15.459418] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _recyclebin_object_purge_frequency = 10m
[2024-09-13 13:02:15.459422] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | ob_event_history_recycle_interval = 7d
[2024-09-13 13:02:15.459425] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | enable_major_freeze = True
[2024-09-13 13:02:15.459429] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | enable_ddl = True
[2024-09-13 13:02:15.459433] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | min_observer_version = 4.2.4.0
[2024-09-13 13:02:15.459444] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] | tablet_meta_table_check_interval = 30m
[2024-09-13 13:02:15.459448] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | ls_meta_table_check_interval = 1s
[2024-09-13 13:02:15.459451] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | tablet_meta_table_scan_batch_count = 999
[2024-09-13 13:02:15.459455] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | rootservice_ready_check_interval = 3s
[2024-09-13 13:02:15.459458] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | server_check_interval = 30s
[2024-09-13 13:02:15.459462] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | migration_disable_time = 3600s
[2024-09-13 13:02:15.459465] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | server_permanent_offline_time = 3600s
[2024-09-13 13:02:15.459469] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | enable_sys_table_ddl = False
[2024-09-13 13:02:15.459475] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] | rootservice_async_task_queue_size = 16384
[2024-09-13 13:02:15.459479] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | rootservice_async_task_thread_count = 4
[2024-09-13 13:02:15.459482] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | lease_time = 10s
[2024-09-13 13:02:15.459486] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _px_chunklist_count_ratio = 1
[2024-09-13 13:02:15.459490] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | plan_cache_evict_interval = 5s
[2024-09-13 13:02:15.459493] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | data_disk_write_limit_percentage = 0
[2024-09-13 13:02:15.459497] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | memstore_limit_percentage = 0
[2024-09-13 13:02:15.459500] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _ob_max_thread_num = 0
[2024-09-13 13:02:15.459504] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | large_query_threshold = 5s
[2024-09-13 13:02:15.459508] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | large_query_worker_percentage = 30
[2024-09-13 13:02:15.459511] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | workers_per_cpu_quota = 10
[2024-09-13 13:02:15.459515] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | location_cache_cpu_quota = 5
[2024-09-13 13:02:15.459518] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _hidden_sys_tenant_memory = 0M
[2024-09-13 13:02:15.459522] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | server_cpu_quota_max = 0
[2024-09-13 13:02:15.459525] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | server_cpu_quota_min = 0
[2024-09-13 13:02:15.459529] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_values_table_folding = True
[2024-09-13 13:02:15.459532] INFO print (ob_server_config.cpp:164)
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _max_malloc_sample_interval = 256 [2024-09-13 13:02:15.459536] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _min_malloc_sample_interval = 16 [2024-09-13 13:02:15.459539] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _auto_drop_recovering_auxiliary_tenant = True [2024-09-13 13:02:15.459543] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _sql_insert_multi_values_split_opt = True [2024-09-13 13:02:15.459547] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _delay_resource_recycle_after_correctness_issue = False [2024-09-13 13:02:15.459550] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_defensive_check = 1 [2024-09-13 13:02:15.459554] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _enable_partition_level_retry = True [2024-09-13 13:02:15.459557] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _chunk_row_store_mem_limit = 0M [2024-09-13 13:02:15.459563] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] | enable_sql_operator_dump = True [2024-09-13 13:02:15.459567] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | autoinc_cache_refresh_interval = 3600s [2024-09-13 13:02:15.459571] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | memory_chunk_cache_size = 0M [2024-09-13 13:02:15.459574] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | cache_wash_threshold = 4GB [2024-09-13 13:02:15.459578] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | memory_limit_percentage = 80 
[2024-09-13 13:02:15.459581] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | syslog_file_uncompressed_count = 0 [2024-09-13 13:02:15.459585] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | syslog_compress_func = none [2024-09-13 13:02:15.459589] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | syslog_disk_size = 0M [2024-09-13 13:02:15.459592] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | enable_syslog_recycle = True [2024-09-13 13:02:15.459596] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | enable_syslog_wf = True [2024-09-13 13:02:15.459599] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | enable_async_syslog = True [2024-09-13 13:02:15.459603] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | max_syslog_file_count = 4 [2024-09-13 13:02:15.459606] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | diag_syslog_per_error_limit = 200 [2024-09-13 13:02:15.459610] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | syslog_io_bandwidth_limit = 30MB [2024-09-13 13:02:15.459613] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | alert_log_level = INFO [2024-09-13 13:02:15.459617] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | syslog_level = WDIAG [2024-09-13 13:02:15.459620] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | obconfig_url = [2024-09-13 13:02:15.459624] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | cluster_id = 1726203323 [2024-09-13 13:02:15.459627] INFO print 
(ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | cluster = ob-poc [2024-09-13 13:02:15.459631] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | rootservice_list = 172.16.51.35:2882:2881;172.16.51.36:2882:2881;172.16.51.37:2882:2881 [2024-09-13 13:02:15.459635] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | weak_read_version_refresh_interval = 100ms [2024-09-13 13:02:15.459639] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | default_compress = archive [2024-09-13 13:02:15.459642] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | default_row_format = dynamic [2024-09-13 13:02:15.459646] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | default_compress_func = zstd_1.3.8 [2024-09-13 13:02:15.459652] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] | schema_history_expire_time = 7d [2024-09-13 13:02:15.459655] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | enable_upgrade_mode = False [2024-09-13 13:02:15.459659] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | enable_perf_event = True [2024-09-13 13:02:15.459663] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | dead_socket_detection_timeout = 3s [2024-09-13 13:02:15.459666] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | debug_sync_timeout = 0 [2024-09-13 13:02:15.459670] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | enable_rich_error_msg = False [2024-09-13 13:02:15.459674] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | enable_record_trace_id = 
False [2024-09-13 13:02:15.459677] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | enable_sql_audit = True [2024-09-13 13:02:15.459681] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | max_string_print_length = 500 [2024-09-13 13:02:15.459684] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | enable_record_trace_log = True [2024-09-13 13:02:15.459688] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | trace_log_slow_query_watermark = 1s [2024-09-13 13:02:15.459691] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | cpu_count = 0 [2024-09-13 13:02:15.459696] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] | system_memory = 8G [2024-09-13 13:02:15.459699] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | memory_limit = 16G [2024-09-13 13:02:15.459703] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | tenant_task_queue_size = 16384 [2024-09-13 13:02:15.459706] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | high_priority_net_thread_count = 0 [2024-09-13 13:02:15.459710] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | net_thread_count = 0 [2024-09-13 13:02:15.459714] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | internal_sql_execute_timeout = 30s [2024-09-13 13:02:15.459717] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | ob_startup_mode = NORMAL [2024-09-13 13:02:15.459721] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | zone = zone1 [2024-09-13 13:02:15.459724] INFO print 
(ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | devname = bond0 [2024-09-13 13:02:15.459728] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | mysql_port = 2881 [2024-09-13 13:02:15.459731] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | rpc_port = 2882 [2024-09-13 13:02:15.459735] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | leak_mod_to_check = NONE [2024-09-13 13:02:15.459739] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | config_additional_dir = etc2;etc3 [2024-09-13 13:02:15.459748] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] | memory_reserved = 500M [2024-09-13 13:02:15.459752] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | _datafile_usage_lower_bound_percentage = 10 [2024-09-13 13:02:15.459756] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | _datafile_usage_upper_bound_percentage = 90 [2024-09-13 13:02:15.459759] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | datafile_disk_percentage = 0 [2024-09-13 13:02:15.459763] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | datafile_maxsize = 0 [2024-09-13 13:02:15.459766] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | datafile_next = 0 [2024-09-13 13:02:15.459770] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | datafile_size = 20G [2024-09-13 13:02:15.459773] INFO print (ob_server_config.cpp:164) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] | redundancy_level = NORMAL [2024-09-13 13:02:15.459777] INFO print (ob_server_config.cpp:164) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] | data_dir = /data1/oceanbase/data [2024-09-13 13:02:15.459783] INFO print (ob_server_config.cpp:167) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] ===================== *stop server config report* ======================= [2024-09-13 13:02:15.459887] WARN [SERVER] init_config (ob_server.cpp:1963) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3][errcode=-4187] Item not match(the devname has been rewritten, and the new value comes from local_ip, old value="eth0", new value="eth0", local_ip="172.16.51.35") [2024-09-13 13:02:15.459915] INFO [SHARE.CONFIG] reload_config (ob_server_config.cpp:361) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=19] update observer memory config success(memory_limit=17179869184, system_memory=8589934592, hidden_sys_memory=3221225472) [2024-09-13 13:02:15.459950] INFO set_running_mode (ob_server.cpp:2085) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] mini mode: false [2024-09-13 13:02:15.459970] INFO [SERVER] init_config (ob_server.cpp:2017) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Build basic information for each syslog file(info="address: "172.16.51.35:2882", observer version: OceanBase_CE 4.2.4.0, revision: 100000082024070810-556a8f594436d32a23ee92289717913cf503184b, sysname: Linux, os release: 3.10.0-957.1.3.el7.x86_64, machine: x86_64, tz GMT offset: 08:00") [2024-09-13 13:02:15.459977] INFO [SERVER] init_config (ob_server.cpp:2021) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] my addr(self_addr="172.16.51.35:2882") [2024-09-13 13:02:15.460994] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=0, tg=0x2b0796875e40, thread_cnt=1, tg->attr_={name:test1, type:3}) [2024-09-13 13:02:15.461039] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=41] create tg succeed(tg_id=1, tg=0x2b0796804570, thread_cnt=1, tg->attr_={name:test2, type:4}) 
[2024-09-13 13:02:15.461052] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=2, tg=0x2b0796804770, thread_cnt=1, tg->attr_={name:test3, type:5}) [2024-09-13 13:02:15.461059] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=3, tg=0x2b0796805e90, thread_cnt=1, tg->attr_={name:test4, type:2}) [2024-09-13 13:02:15.461071] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=4, tg=0x2b0796890030, thread_cnt=1, tg->attr_={name:test5, type:6}) [2024-09-13 13:02:15.461091] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] create tg succeed(tg_id=5, tg=0x2b0796892030, thread_cnt=2, tg->attr_={name:test6, type:7}) [2024-09-13 13:02:15.461099] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=6, tg=0x2b0796890b70, thread_cnt=10, tg->attr_={name:test7, type:4}) [2024-09-13 13:02:15.461107] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=7, tg=0x2b0796890d70, thread_cnt=1, tg->attr_={name:test8, type:1}) [2024-09-13 13:02:15.461112] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=8, tg=0x2b0796890eb0, thread_cnt=1, tg->attr_={name:memDump, type:2}) [2024-09-13 13:02:15.461124] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=9, tg=0x2b07968aa030, thread_cnt=1, tg->attr_={name:SchemaRefTask, type:5}) [2024-09-13 13:02:15.461130] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=10, tg=0x2b07968ab750, thread_cnt=1, tg->attr_={name:ReqMemEvict, type:3}) [2024-09-13 13:02:15.461137] INFO create_tg (thread_mgr.h:1003) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=11, tg=0x2b07968ab8c0, thread_cnt=1, tg->attr_={name:replica_control, type:2}) [2024-09-13 13:02:15.461142] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=12, tg=0x2b07968ab980, thread_cnt=1, tg->attr_={name:SyslogCompress, type:2}) [2024-09-13 13:02:15.461148] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=13, tg=0x2b07968aba40, thread_cnt=1, tg->attr_={name:testObTh, type:2}) [2024-09-13 13:02:15.461155] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=14, tg=0x2b07968abb00, thread_cnt=1, tg->attr_={name:ComTh, type:2}) [2024-09-13 13:02:15.461161] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=15, tg=0x2b07968abbc0, thread_cnt=1, tg->attr_={name:ComQueueTh, type:4}) [2024-09-13 13:02:15.461167] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=16, tg=0x2b07968abdc0, thread_cnt=1, tg->attr_={name:ComTimerTh, type:3}) [2024-09-13 13:02:15.461174] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=17, tg=0x2b07968abf30, thread_cnt=1, tg->attr_={name:Blacklist, type:2}) [2024-09-13 13:02:15.461180] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=18, tg=0x2b0796805f50, thread_cnt=1, tg->attr_={name:PartSerMigRetryQt, type:2}) [2024-09-13 13:02:15.461190] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=19, tg=0x2b0796890f70, thread_cnt=1, tg->attr_={name:TransMigrate, type:4}) [2024-09-13 13:02:15.461196] INFO create_tg (thread_mgr.h:1003) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=20, tg=0x2b0796891170, thread_cnt=1, tg->attr_={name:StandbyTimestampService, type:2}) [2024-09-13 13:02:15.461201] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=21, tg=0x2b0796891230, thread_cnt=1, tg->attr_={name:WeakRdSrv, type:2}) [2024-09-13 13:02:15.461208] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=22, tg=0x2b07968912f0, thread_cnt=1, tg->attr_={name:TransTaskWork, type:4}) [2024-09-13 13:02:15.461213] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=23, tg=0x2b07968914f0, thread_cnt=8, tg->attr_={name:DDLTaskExecutor3, type:2}) [2024-09-13 13:02:15.461223] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] create tg succeed(tg_id=24, tg=0x2b07968915b0, thread_cnt=1, tg->attr_={name:TSWorker, type:4}) [2024-09-13 13:02:15.461229] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=25, tg=0x2b07968917b0, thread_cnt=8, tg->attr_={name:BRPC, type:2}) [2024-09-13 13:02:15.461234] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=26, tg=0x2b0796891870, thread_cnt=1, tg->attr_={name:RLMGR, type:2}) [2024-09-13 13:02:15.461241] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=27, tg=0x2b0796891930, thread_cnt=3, tg->attr_={name:LeaseQueueTh, type:2}) [2024-09-13 13:02:15.461246] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=28, tg=0x2b07968919f0, thread_cnt=1, tg->attr_={name:DDLQueueTh, type:2}) [2024-09-13 13:02:15.461251] INFO create_tg (thread_mgr.h:1003) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=29, tg=0x2b0796891ab0, thread_cnt=6, tg->attr_={name:MysqlQueueTh, type:2}) [2024-09-13 13:02:15.461256] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=30, tg=0x2b0796891b70, thread_cnt=4, tg->attr_={name:DDLPQueueTh, type:2}) [2024-09-13 13:02:15.461262] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=31, tg=0x2b0796891c30, thread_cnt=2, tg->attr_={name:DiagnoseQueueTh, type:2}) [2024-09-13 13:02:15.461271] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=32, tg=0x2b07968ac030, thread_cnt=16, tg->attr_={name:DdlBuild, type:6}) [2024-09-13 13:02:15.461278] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=33, tg=0x2b07968acb70, thread_cnt=2, tg->attr_={name:LSService, type:1}) [2024-09-13 13:02:15.461284] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=34, tg=0x2b07968accb0, thread_cnt=1, tg->attr_={name:ObCreateStandbyFromNetActor, type:1}) [2024-09-13 13:02:15.461291] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=35, tg=0x2b07968acdf0, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}) [2024-09-13 13:02:15.461299] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=36, tg=0x2b07968acf30, thread_cnt=1, tg->attr_={name:IntermResGC, type:3}) [2024-09-13 13:02:15.461304] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=37, tg=0x2b07968ad0a0, thread_cnt=1, tg->attr_={name:ServerGTimer, type:3}) [2024-09-13 13:02:15.461309] INFO create_tg (thread_mgr.h:1003) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=38, tg=0x2b07968ad210, thread_cnt=1, tg->attr_={name:FreezeTimer, type:3}) [2024-09-13 13:02:15.461314] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=39, tg=0x2b07968ad380, thread_cnt=1, tg->attr_={name:SqlMemTimer, type:3}) [2024-09-13 13:02:15.461319] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=40, tg=0x2b07968ad4f0, thread_cnt=1, tg->attr_={name:ServerTracerTimer, type:3}) [2024-09-13 13:02:15.461329] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=41, tg=0x2b07968ad660, thread_cnt=1, tg->attr_={name:RSqlPool, type:3}) [2024-09-13 13:02:15.461335] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=42, tg=0x2b07968ad7d0, thread_cnt=1, tg->attr_={name:KVCacheWash, type:3}) [2024-09-13 13:02:15.461343] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] create tg succeed(tg_id=43, tg=0x2b07968ad940, thread_cnt=1, tg->attr_={name:KVCacheRep, type:3}) [2024-09-13 13:02:15.461348] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=44, tg=0x2b07968adab0, thread_cnt=1, tg->attr_={name:ObHeartbeat, type:3}) [2024-09-13 13:02:15.461355] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=45, tg=0x2b07968adc20, thread_cnt=1, tg->attr_={name:PlanCacheEvict, type:3}) [2024-09-13 13:02:15.461359] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=46, tg=0x2b07968add90, thread_cnt=1, tg->attr_={name:TabletStatRpt, type:3}) [2024-09-13 13:02:15.461366] INFO create_tg (thread_mgr.h:1003) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=47, tg=0x2b0796809ea0, thread_cnt=1, tg->attr_={name:PsCacheEvict, type:3}) [2024-09-13 13:02:15.461373] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=48, tg=0x2b0796831ea0, thread_cnt=1, tg->attr_={name:MergeLoop, type:3}) [2024-09-13 13:02:15.461379] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=49, tg=0x2b0796833ea0, thread_cnt=1, tg->attr_={name:SSTableGC, type:3}) [2024-09-13 13:02:15.461384] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=50, tg=0x2b0796835ea0, thread_cnt=1, tg->attr_={name:MediumLoop, type:3}) [2024-09-13 13:02:15.461391] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=51, tg=0x2b0796869e40, thread_cnt=1, tg->attr_={name:WriteCkpt, type:3}) [2024-09-13 13:02:15.461396] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=52, tg=0x2b079686be40, thread_cnt=1, tg->attr_={name:EXTLogWash, type:3}) [2024-09-13 13:02:15.461406] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=53, tg=0x2b079686de40, thread_cnt=1, tg->attr_={name:LineCache, type:3}) [2024-09-13 13:02:15.461411] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=54, tg=0x2b079686fe40, thread_cnt=1, tg->attr_={name:LocalityReload, type:3}) [2024-09-13 13:02:15.461416] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=55, tg=0x2b0796871e40, thread_cnt=1, tg->attr_={name:MemstoreGC, type:3}) [2024-09-13 13:02:15.461421] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] 
create tg succeed(tg_id=56, tg=0x2b0796873e40, thread_cnt=1, tg->attr_={name:DiskUseReport, type:3}) [2024-09-13 13:02:15.461428] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=57, tg=0x2b0796891cf0, thread_cnt=1, tg->attr_={name:CLOGReqMinor, type:3}) [2024-09-13 13:02:15.461440] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] create tg succeed(tg_id=58, tg=0x2b0796891e60, thread_cnt=1, tg->attr_={name:PGArchiveLog, type:3}) [2024-09-13 13:02:15.461448] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=59, tg=0x2b07968ae030, thread_cnt=1, tg->attr_={name:CKPTLogRep, type:3}) [2024-09-13 13:02:15.461454] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=60, tg=0x2b07968ae1a0, thread_cnt=1, tg->attr_={name:RebuildRetry, type:3}) [2024-09-13 13:02:15.461465] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] create tg succeed(tg_id=61, tg=0x2b07968ae310, thread_cnt=1, tg->attr_={name:TableMgrGC, type:3}) [2024-09-13 13:02:15.461470] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=62, tg=0x2b07968ae480, thread_cnt=1, tg->attr_={name:IndexSche, type:3}) [2024-09-13 13:02:15.461475] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=63, tg=0x2b07968ae5f0, thread_cnt=1, tg->attr_={name:FreInfoReload, type:3}) [2024-09-13 13:02:15.461483] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=64, tg=0x2b07968ae760, thread_cnt=1, tg->attr_={name:HAGtsMgr, type:3}) [2024-09-13 13:02:15.461489] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=65, 
tg=0x2b07968ae8d0, thread_cnt=1, tg->attr_={name:HAGtsHB, type:3})
[2024-09-13 13:02:15.461493] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=66, tg=0x2b07968aea40, thread_cnt=1, tg->attr_={name:RebuildTask, type:3})
[2024-09-13 13:02:15.461498] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=67, tg=0x2b07968aebb0, thread_cnt=1, tg->attr_={name:LogDiskMon, type:3})
[2024-09-13 13:02:15.461503] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=68, tg=0x2b07968aed20, thread_cnt=1, tg->attr_={name:ILOGFlush, type:3})
[2024-09-13 13:02:15.461508] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=69, tg=0x2b07968aee90, thread_cnt=1, tg->attr_={name:ILOGPurge, type:3})
[2024-09-13 13:02:15.461515] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=70, tg=0x2b07968af000, thread_cnt=1, tg->attr_={name:RLogClrCache, type:3})
[2024-09-13 13:02:15.461520] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=71, tg=0x2b07968af170, thread_cnt=1, tg->attr_={name:TableStatRpt, type:3})
[2024-09-13 13:02:15.461525] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=72, tg=0x2b07968af2e0, thread_cnt=1, tg->attr_={name:MacroMetaMgr, type:3})
[2024-09-13 13:02:15.461530] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=73, tg=0x2b07968af450, thread_cnt=1, tg->attr_={name:StoreFileGC, type:3})
[2024-09-13 13:02:15.461534] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=74, tg=0x2b07968af5c0, thread_cnt=1, tg->attr_={name:LeaseHB, type:3})
[2024-09-13 13:02:15.461539] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=75, tg=0x2b07968af730, thread_cnt=1, tg->attr_={name:ClusterTimer, type:3})
[2024-09-13 13:02:15.461546] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=76, tg=0x2b07968af8a0, thread_cnt=1, tg->attr_={name:MergeTimer, type:3})
[2024-09-13 13:02:15.461552] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=77, tg=0x2b07968afa10, thread_cnt=1, tg->attr_={name:CFC, type:3})
[2024-09-13 13:02:15.461557] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=78, tg=0x2b07968afb80, thread_cnt=1, tg->attr_={name:CCDF, type:3})
[2024-09-13 13:02:15.461562] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=79, tg=0x2b07968afcf0, thread_cnt=1, tg->attr_={name:LogMysqlPool, type:3})
[2024-09-13 13:02:15.461569] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=80, tg=0x2b07968afe60, thread_cnt=1, tg->attr_={name:TblCliSqlPool, type:3})
[2024-09-13 13:02:15.461574] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=81, tg=0x2b07968adf00, thread_cnt=1, tg->attr_={name:QueryExecCtxGC, type:2})
[2024-09-13 13:02:15.461582] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=82, tg=0x2b07968b0030, thread_cnt=1, tg->attr_={name:DtlDfc, type:3})
[2024-09-13 13:02:15.461587] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=83, tg=0x2b07968b01a0, thread_cnt=1, tg->attr_={name:LogIOCb, type:4})
[2024-09-13 13:02:15.461591] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=84, tg=0x2b07968b03a0, thread_cnt=1, tg->attr_={name:LogSharedQueueThread, type:4})
[2024-09-13 13:02:15.461598] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=85, tg=0x2b07968b05a0, thread_cnt=1, tg->attr_={name:ReplaySrv, type:4})
[2024-09-13 13:02:15.461604] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=86, tg=0x2b07968b07a0, thread_cnt=1, tg->attr_={name:LogRouteSrv, type:4})
[2024-09-13 13:02:15.461609] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=87, tg=0x2b07968b09a0, thread_cnt=1, tg->attr_={name:LogRouterTimer, type:3})
[2024-09-13 13:02:15.461618] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=88, tg=0x2b07968b2030, thread_cnt=4, tg->attr_={name:LSWorker, type:7})
[2024-09-13 13:02:15.461628] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=89, tg=0x2b07968ca030, thread_cnt=1, tg->attr_={name:LSIdlePool, type:7})
[2024-09-13 13:02:15.461637] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=90, tg=0x2b07968e2030, thread_cnt=1, tg->attr_={name:LSDeadPool, type:7})
[2024-09-13 13:02:15.461644] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=91, tg=0x2b07968b0b10, thread_cnt=1, tg->attr_={name:LSTimer, type:3})
[2024-09-13 13:02:15.461649] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=92, tg=0x2b07968b0c80, thread_cnt=1, tg->attr_={name:PalfGC, type:3})
[2024-09-13 13:02:15.461654] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=93, tg=0x2b07968b0df0, thread_cnt=3, tg->attr_={name:LSFreeze, type:4})
[2024-09-13 13:02:15.461661] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=94, tg=0x2b07968b0ff0, thread_cnt=1, tg->attr_={name:FetchLog, type:4})
[2024-09-13 13:02:15.461666] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=95, tg=0x2b07968b11f0, thread_cnt=1, tg->attr_={name:DagScheduler, type:2})
[2024-09-13 13:02:15.461674] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=96, tg=0x2b07968b12b0, thread_cnt=1, tg->attr_={name:DagWorker, type:2})
[2024-09-13 13:02:15.461681] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=97, tg=0x2b07968b1370, thread_cnt=1, tg->attr_={name:RCSrv, type:4})
[2024-09-13 13:02:15.461686] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=98, tg=0x2b07968b1570, thread_cnt=1, tg->attr_={name:ApplySrv, type:4})
[2024-09-13 13:02:15.461694] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=99, tg=0x2b07968b1770, thread_cnt=1, tg->attr_={name:GlobalCtxTimer, type:3})
[2024-09-13 13:02:15.461699] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=100, tg=0x2b07968b18e0, thread_cnt=1, tg->attr_={name:StorageLogWriter, type:2})
[2024-09-13 13:02:15.461704] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=101, tg=0x2b07968b19a0, thread_cnt=1, tg->attr_={name:ReplayProcessStat, type:3})
[2024-09-13 13:02:15.461709] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=102, tg=0x2b07968b1b10, thread_cnt=1, tg->attr_={name:ActiveSessHist, type:3})
[2024-09-13 13:02:15.461714] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=103, tg=0x2b07968b1c80, thread_cnt=1, tg->attr_={name:CTASCleanUpTimer, type:3})
[2024-09-13 13:02:15.461719] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=104, tg=0x2b07968b1df0, thread_cnt=1, tg->attr_={name:DDLScanTask, type:3})
[2024-09-13 13:02:15.461726] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=105, tg=0x2b07968fa030, thread_cnt=1, tg->attr_={name:LSMetaCh, type:3})
[2024-09-13 13:02:15.461731] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=106, tg=0x2b07968fa1a0, thread_cnt=1, tg->attr_={name:TbMetaCh, type:3})
[2024-09-13 13:02:15.461735] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=107, tg=0x2b07968fa310, thread_cnt=1, tg->attr_={name:SvrMetaCh, type:3})
[2024-09-13 13:02:15.461744] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=108, tg=0x2b07968fa480, thread_cnt=1, tg->attr_={name:ArbGCTimerP, type:3})
[2024-09-13 13:02:15.461749] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=109, tg=0x2b07968fa5f0, thread_cnt=1, tg->attr_={name:DataDictTimer, type:3})
[2024-09-13 13:02:15.461754] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=110, tg=0x2b07968fa760, thread_cnt=1, tg->attr_={name:CDCSrv, type:2})
[2024-09-13 13:02:15.461759] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=111, tg=0x2b07968fa820, thread_cnt=1, tg->attr_={name:LogUpdater, type:3})
[2024-09-13 13:02:15.461764] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=112, tg=0x2b07968fa990, thread_cnt=1, tg->attr_={name:HeartBeatCheckTask, type:3})
[2024-09-13 13:02:15.461769] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=113, tg=0x2b07968fab00, thread_cnt=1, tg->attr_={name:RedefHeartBeatTask, type:3})
[2024-09-13 13:02:15.461773] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=114, tg=0x2b07968fac70, thread_cnt=1, tg->attr_={name:SSTableDefragment, type:3})
[2024-09-13 13:02:15.461778] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=115, tg=0x2b07968fade0, thread_cnt=1, tg->attr_={name:TenantMetaMemMgr, type:3})
[2024-09-13 13:02:15.461785] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=116, tg=0x2b07968faf50, thread_cnt=1, tg->attr_={name:IngressService, type:3})
[2024-09-13 13:02:15.461790] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=117, tg=0x2b07968fb0c0, thread_cnt=2, tg->attr_={name:HeartbeatService, type:1})
[2024-09-13 13:02:15.461799] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] create tg succeed(tg_id=118, tg=0x2b07968fb200, thread_cnt=1, tg->attr_={name:DetectManager, type:2})
[2024-09-13 13:02:15.461807] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=119, tg=0x2b07968fb2c0, thread_cnt=1, tg->attr_={name:ConfigMgr, type:3})
[2024-09-13 13:02:15.461813] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=120, tg=0x2b07968fb430, thread_cnt=1, tg->attr_={name:IO_TUNING, type:2})
[2024-09-13 13:02:15.461817] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=121, tg=0x2b07968fb4f0, thread_cnt=1, tg->attr_={name:IO_SCHEDULE, type:2})
[2024-09-13 13:02:15.461822] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=122, tg=0x2b07968fb5b0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2})
[2024-09-13 13:02:15.461827] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=123, tg=0x2b07968fb670, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2})
[2024-09-13 13:02:15.461832] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=124, tg=0x2b07968fb730, thread_cnt=1, tg->attr_={name:IO_HEALTH, type:4})
[2024-09-13 13:02:15.461836] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=125, tg=0x2b07968fb930, thread_cnt=1, tg->attr_={name:IO_BENCHMARK, type:2})
[2024-09-13 13:02:15.461841] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=126, tg=0x2b07968fb9f0, thread_cnt=1, tg->attr_={name:TimezoneMgr, type:3})
[2024-09-13 13:02:15.461846] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=127, tg=0x2b07968fbb60, thread_cnt=1, tg->attr_={name:MasterKeyMgr, type:4})
[2024-09-13 13:02:15.461851] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=128, tg=0x2b07968fbd60, thread_cnt=1, tg->attr_={name:SrsMgr, type:3})
[2024-09-13 13:02:15.461858] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=129, tg=0x2b07968fc030, thread_cnt=1, tg->attr_={name:InfoPoolResize, type:3})
[2024-09-13 13:02:15.461863] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=130, tg=0x2b07968fc1a0, thread_cnt=1, tg->attr_={name:MinorScan, type:3})
[2024-09-13 13:02:15.461882] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=131, tg=0x2b07968fc310, thread_cnt=1, tg->attr_={name:MajorScan, type:3})
[2024-09-13 13:02:15.461889] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=132, tg=0x2b07968fc480, thread_cnt=4, tg->attr_={name:TransferSrv, type:1})
[2024-09-13 13:02:15.461894] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=133, tg=0x2b07968fc5c0, thread_cnt=1, tg->attr_={name:WrTimer, type:3})
[2024-09-13 13:02:15.461902] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=134, tg=0x2b07968fc730, thread_cnt=8, tg->attr_={name:SvrStartupHandler, type:4})
[2024-09-13 13:02:15.461907] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=135, tg=0x2b07968fc930, thread_cnt=1, tg->attr_={name:TTLManager, type:3})
[2024-09-13 13:02:15.461914] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=136, tg=0x2b07968fcaa0, thread_cnt=1, tg->attr_={name:TTLTabletMgr, type:3})
[2024-09-13 13:02:15.461919] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=137, tg=0x2b07968fcc10, thread_cnt=1, tg->attr_={name:TntSharedTimer, type:3})
[2024-09-13 13:02:15.461924] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=138, tg=0x2b07968fcd80, thread_cnt=1, tg->attr_={name:LogFetcherBGW, type:3})
[2024-09-13 13:02:15.461931] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=139, tg=0x2b07968fcef0, thread_cnt=1, tg->attr_={name:TableGroupCommitMgr, type:3})
[2024-09-13 13:02:15.461943] INFO init_config (ob_server.cpp:2042) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] start tg(lib::TGDefIDs::ServerGTimer=37, tg_name=ServerGTimer)
[2024-09-13 13:02:15.462185] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19878][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=12884901888)
[2024-09-13 13:02:15.462283] INFO register_pm (ob_page_manager.cpp:40) [19878][][T0][Y0-0000000000000000-0-0] [lt=30] register pm finish(ret=0, &pm=0x2b079ea56340, pm.get_tid()=19878, tenant_id=500)
[2024-09-13 13:02:15.462329] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19878][][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.462348] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19878][][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.462443] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] ObTimer create success(this=0x2b07968ad0c0, thread_id=19878, lbt()=0x24edc06b 0x13836960 0x115a4182 0xb8e87e9 0xb8dd00c 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:15.462457] INFO init_config (ob_server.cpp:2044) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] start tg(lib::TGDefIDs::FreezeTimer=38, tg_name=FreezeTimer)
[2024-09-13 13:02:15.462631] INFO run1 (ob_timer.cpp:361) [19878][][T0][Y0-0000000000000000-0-0] [lt=4] timer thread started(this=0x2b07968ad0c0, tid=19878, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:15.462633] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19879][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=17179869184)
[2024-09-13 13:02:15.462652] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.462658] INFO run1 (ob_timer.cpp:374) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5] dump timer info(this=0x2b07968ad0c0, tasks_num=0, wakeup_time=0)
[2024-09-13 13:02:15.462700] INFO register_pm (ob_page_manager.cpp:40) [19879][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b079ead4340, pm.get_tid()=19879, tenant_id=500)
[2024-09-13 13:02:15.462716] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19879][][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.462749] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ObTimer create success(this=0x2b07968ad230, thread_id=19879, lbt()=0x24edc06b 0x13836960 0x115a4182 0xb8e886b 0xb8dd00c 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:15.462762] INFO init_config (ob_server.cpp:2046) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] start tg(lib::TGDefIDs::SqlMemTimer=39, tg_name=SqlMemTimer)
[2024-09-13 13:02:15.463001] INFO run1 (ob_timer.cpp:361) [19879][][T0][Y0-0000000000000000-0-0] [lt=7] timer thread started(this=0x2b07968ad230, tid=19879, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:15.463016] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19880][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=21474836480)
[2024-09-13 13:02:15.463025] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19879][FreezeTimer][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.463069] INFO register_pm (ob_page_manager.cpp:40) [19880][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b079eb52340, pm.get_tid()=19880, tenant_id=500)
[2024-09-13 13:02:15.463079] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19880][][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.463110] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] ObTimer create success(this=0x2b07968ad3a0, thread_id=19880, lbt()=0x24edc06b 0x13836960 0x115a4182 0xb8e88ed 0xb8dd00c 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:15.463121] INFO init_config (ob_server.cpp:2048) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] start tg(lib::TGDefIDs::ServerTracerTimer=40, tg_name=ServerTracerTimer)
[2024-09-13 13:02:15.463287] INFO run1 (ob_timer.cpp:361) [19880][][T0][Y0-0000000000000000-0-0] [lt=4] timer thread started(this=0x2b07968ad3a0, tid=19880, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:15.463295] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19880][SqlMemTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.463315] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19881][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=25769803776)
[2024-09-13 13:02:15.463379] INFO register_pm (ob_page_manager.cpp:40) [19881][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b079ebd0340, pm.get_tid()=19881, tenant_id=500)
[2024-09-13 13:02:15.463393] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19881][][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.463424] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] ObTimer create success(this=0x2b07968ad510, thread_id=19881, lbt()=0x24edc06b 0x13836960 0x115a4182 0xb8e896f 0xb8dd00c 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:15.463443] INFO init_config (ob_server.cpp:2050) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] start tg(lib::TGDefIDs::CTASCleanUpTimer=103, tg_name=CTASCleanUpTimer)
[2024-09-13 13:02:15.463757] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19882][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=30064771072)
[2024-09-13 13:02:15.463761] INFO run1 (ob_timer.cpp:361) [19881][][T0][Y0-0000000000000000-0-0] [lt=7] timer thread started(this=0x2b07968ad510, tid=19881, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:15.463777] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19881][ServerTracerTim][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.463802] INFO register_pm (ob_page_manager.cpp:40) [19882][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b079ee56340, pm.get_tid()=19882, tenant_id=500)
[2024-09-13 13:02:15.463811] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19882][][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.463837] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] ObTimer create success(this=0x2b07968b1ca0, thread_id=19882, lbt()=0x24edc06b 0x13836960 0x115a4182 0xb8e89f8 0xb8dd00c 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:15.463891] INFO init (ob_config_manager.cpp:68) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] start tg(lib::TGDefIDs::CONFIG_MGR=119, tg_name=ConfigMgr)
[2024-09-13 13:02:15.464048] INFO run1 (ob_timer.cpp:361) [19882][][T0][Y0-0000000000000000-0-0] [lt=4] timer thread started(this=0x2b07968b1ca0, tid=19882, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:15.464059] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19882][CTASCleanUpTime][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.464089] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19883][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=34359738368)
[2024-09-13 13:02:15.464135] INFO register_pm (ob_page_manager.cpp:40) [19883][][T0][Y0-0000000000000000-0-0] [lt=7] register pm finish(ret=0, &pm=0x2b079eed4340, pm.get_tid()=19883, tenant_id=500)
[2024-09-13 13:02:15.464145] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19883][][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.464171] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] ObTimer create success(this=0x2b07968fb2e0, thread_id=19883, lbt()=0x24edc06b 0x13836960 0x115a4182 0x111d7e79 0xb8e8a59 0xb8dd00c 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:15.464325] INFO [SERVER] check_vm_max_map_count (ob_check_params.cpp:75) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] [check OS params]:vm.max_map_count is within the range(max_map_count=655360)
[2024-09-13 13:02:15.464376] INFO [SERVER] check_vm_min_free_kbytes (ob_check_params.cpp:100) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] [check OS params]:vm.min_free_kbytes is within the range(vm_min_free_kbytes=2097152)
[2024-09-13 13:02:15.464409] INFO [SERVER] check_vm_overcommit_memory (ob_check_params.cpp:127) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] [check OS params]:vm.overcommit_memory is equal to 0(vm_overcommit_memory=0)
[2024-09-13 13:02:15.464462] INFO [SERVER] check_fs_file_max (ob_check_params.cpp:151) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] [check OS params]:fs.file-max is greater than or equal to 6573688(fs_file_max=6573688)
[2024-09-13 13:02:15.464471] INFO [SERVER] check_ulimit_open_files (ob_check_params.cpp:173) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] [check OS params]:open files limit is greater than or equal to 655300(rlim.rlim_cur=655350, rlim.rlim_max=655350)
[2024-09-13 13:02:15.464477] INFO [SERVER] check_ulimit_max_user_processes (ob_check_params.cpp:200) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] [check OS params]:ulimit.max_user_processes is greater than or equal to 655300(rlim.rlim_cur=655360, rlim.rlim_max=655360)
[2024-09-13 13:02:15.464484] INFO [SERVER] check_ulimit_core_file_size (ob_check_params.cpp:227) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] [check OS params]:core file size limit is unlimited
[2024-09-13 13:02:15.464489] INFO [SERVER] check_ulimit_stack_size (ob_check_params.cpp:245) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] [check OS params]:stack size limit is larger than 1M(rlim.rlim_cur=18446744073709551615, rlim.rlim_max=18446744073709551615)
[2024-09-13 13:02:15.464539] INFO [SERVER] check_current_clocksource (ob_check_params.cpp:295) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] [check OS params]:current_clocksource is in proper range(clocksource="kvm-clock", ret=0)
[2024-09-13 13:02:15.464553] INFO [LIB] set_param (achunk_mgr.cpp:42) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set large page param(large_page_type_=0)
[2024-09-13 13:02:15.464557] INFO run1 (ob_timer.cpp:361) [19883][][T0][Y0-0000000000000000-0-0] [lt=3] timer thread started(this=0x2b07968fb2e0, tid=19883, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:15.464562] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.464569] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19883][ConfigMgr][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.464590] INFO [LIB] init (ob_log.cpp:1491) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=19] [server_start 2/18] observer syslog service init begin.
[2024-09-13 13:02:15.467595] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ob_pthread_create start
[2024-09-13 13:02:15.467800] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19884][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=38654705664)
[2024-09-13 13:02:15.467888] INFO register_pm (ob_page_manager.cpp:40) [19884][][T0][Y0-0000000000000000-0-0] [lt=37] register pm finish(ret=0, &pm=0x2b079ef52340, pm.get_tid()=19884, tenant_id=500)
[2024-09-13 13:02:15.467899] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19884][][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.467914] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=34] ob_pthread_create succeed(thread=0x2b0796931750)
[2024-09-13 13:02:15.467922] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.467934] INFO [LIB] init (ob_log.cpp:1565) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] [server_start 3/18] observer syslog service init success.
[2024-09-13 13:02:15.467949] INFO init (ob_log_compressor.cpp:54) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] start tg(TGDefIDs::SYSLOG_COMPRESS=12, tg_name=SyslogCompress)
[2024-09-13 13:02:15.468100] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19885][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=42949672960)
[2024-09-13 13:02:15.468154] INFO register_pm (ob_page_manager.cpp:40) [19885][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b079efd0340, pm.get_tid()=19885, tenant_id=500)
[2024-09-13 13:02:15.468168] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19885][][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.468180] INFO [LIB] init (ob_log_compressor.cpp:65) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] syslog compressor init finish (ret=0)
[2024-09-13 13:02:15.468194] INFO [LIB] run1 (ob_log_compressor.cpp:195) [19885][SyslogCompress][T0][Y0-0000000000000000-0-0] [lt=7] syslog compress thread start
[2024-09-13 13:02:15.468220] INFO [SERVER.OMT] init (ob_tenant_timezone.cpp:49) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] tenant timezone init(ret=0, tenant_id_=1, sizeof(ObTimeZoneInfoManager)=3072)
[2024-09-13 13:02:15.468240] INFO [SERVER.OMT] add_tenant_timezone (ob_tenant_timezone_mgr.cpp:170) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] add tenant timezone success!(tenant_id=1, sizeof(ObTenantTimezone)=3200)
[2024-09-13 13:02:15.468260] INFO [SQL] init_sql_factories (ob_sql_init.h:53) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] init sql factories
[2024-09-13 13:02:15.472453] INFO [SQL.ENG] create_hash_table (ob_serializable_function.cpp:147) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] function serialization hash table created(func_cnt=87260, bucket_size=262144, size=7361, conflicts=330)
[2024-09-13 13:02:15.608316] INFO [SHARE] change_initial_value (ob_system_variable.cpp:3269) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=36] succ to change_initial_value(version_comment="OceanBase_CE 4.2.4.0 (r100000082024070810-556a8f594436d32a23ee92289717913cf503184b) (Built Jul 8 2024 11:07:07)", system_time_zone_str="+08:00", default_coll_int_str="45", server_uuid="581afd93-718d-11ef-bdd3-fa163e45e664")
[2024-09-13 13:02:15.608629] WDIAG [OCCAM] init (ob_vtable_event_recycle_buffer.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=33][errcode=-4002] invalid argument(ret=-4002, mem_tag="MdsEventCache", recycle_buffer_number=0, recycle_buffer_number=0, hash_idx_bkt_num_each=8192)
[2024-09-13 13:02:15.608653] WDIAG [OCCAM] init (ob_mds_event_buffer.h:250) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=22][errcode=0] init failed(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:15.608725] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.609530] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:15.609581] INFO set_str (ob_mem_leak_checker.h:131) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] leak mod to check: NONE
[2024-09-13 13:02:15.609620] INFO set_interval (ob_malloc_sample_struct.h:146) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set the min or max malloc times between two samples succeed,max_interval=256, min_interval=16
[2024-09-13 13:02:15.609949] INFO [SERVER] init_pre_setting (ob_server.cpp:2109) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] Whether record old log file(record_old_log_file=true)
[2024-09-13 13:02:15.609960] INFO [SERVER] init_pre_setting (ob_server.cpp:2111) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] Whether log warn(log_warn=true)
[2024-09-13 13:02:15.609966] INFO [SERVER] init_pre_setting (ob_server.cpp:2114) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] Whether compress syslog file(compress_func_ptr="none")
[2024-09-13 13:02:15.609982] INFO [SERVER] init_pre_setting (ob_server.cpp:2118) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] init log config(record_old_log_file=true, log_warn=true, enable_async_syslog=true, max_disk_size=0, compress_func_ptr="none", min_uncompressed_count=0)
[2024-09-13 13:02:15.609991] INFO [SERVER] init_pre_setting (ob_server.cpp:2122) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] recycle log file(count=4)
[2024-09-13 13:02:15.610489] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19886][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=47244640256)
[2024-09-13 13:02:15.610571] INFO register_pm (ob_page_manager.cpp:40) [19886][][T0][Y0-0000000000000000-0-0] [lt=64] register pm finish(ret=0, &pm=0x2b07a1a56340, pm.get_tid()=19886, tenant_id=500)
[2024-09-13 13:02:15.610589] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19886][][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.610614] INFO [SERVER] init_pre_setting (ob_server.cpp:2154) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set limit memory(limit_memory=17179869184)
[2024-09-13 13:02:15.610631] INFO [SERVER] init_pre_setting (ob_server.cpp:2156) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:15.610830] INFO [STORAGE] init_local_dirs (ob_file_system_router.cpp:161) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=1] succeed to construct local dir(data_dir_="/data1/oceanbase/data", slog_dir_="/data1/oceanbase/data/slog", clog_dir_="/data1/oceanbase/data/clog", sstable_dir_="/data1/oceanbase/data/sstable")
[2024-09-13 13:02:15.611042] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19887][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=51539607552)
[2024-09-13 13:02:15.611092] INFO register_pm (ob_page_manager.cpp:40) [19887][][T0][Y0-0000000000000000-0-0] [lt=39] register pm finish(ret=0, &pm=0x2b07a1ad4340, pm.get_tid()=19887, tenant_id=500)
[2024-09-13 13:02:15.611107] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19887][][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.611130] INFO [COMMON] run1 (ob_io_struct.cpp:819) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=9] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:15.611148] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.611211] WDIAG [STORAGE.BLKMGR] get_all_macro_ids (ob_block_manager.cpp:576) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=0][errcode=-4006] fail to for each block map(ret=-4006)
[2024-09-13 13:02:15.611229] WDIAG [COMMON] send_detect_task (ob_io_struct.cpp:791) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] fail to get macro ids(ret=-4006, macro_ids=[])
[2024-09-13 13:02:15.611246] WDIAG [COMMON] run1 (ob_io_struct.cpp:826) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] fail to send detect task(ret=-4006)
[2024-09-13 13:02:15.611284] INFO [COMMON] init_macro_pool (ob_io_struct.cpp:352) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=1] succ to init io macro pool(memory_limit=536870912, block_count=2)
[2024-09-13 13:02:15.611819] WDIAG create_tg_tenant (thread_mgr.h:1032) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=1][errcode=0] create tg tenant but tenant tg helper is null(tg_def_id=122, tg_id=272)
[2024-09-13 13:02:15.611837] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=17] create tg succeed(tg_id=272, tg=0x2b07969d3df0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07969d3df0)
[2024-09-13 13:02:15.611846] INFO init (ob_io_struct.cpp:2551) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=272, tg_name=IO_CALLBACK)
[2024-09-13 13:02:15.612007] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19888][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=55834574848)
[2024-09-13 13:02:15.612078] INFO register_pm (ob_page_manager.cpp:40) [19888][][T0][Y0-0000000000000000-0-0] [lt=36] register pm finish(ret=0, &pm=0x2b07a1b52340, pm.get_tid()=19888, tenant_id=500)
[2024-09-13 13:02:15.612092] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19888][][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.612107] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [19888][DiskCB][T0][Y0-0000000000000000-0-0] [lt=8] io callback thread started
[2024-09-13 13:02:15.612184] WDIAG create_tg_tenant (thread_mgr.h:1032) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5][errcode=0] create tg tenant but tenant tg helper is null(tg_def_id=122, tg_id=273)
[2024-09-13 13:02:15.612336] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19889][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=60129542144)
[2024-09-13 13:02:15.612413] INFO register_pm (ob_page_manager.cpp:40) [19889][][T0][Y0-0000000000000000-0-0] [lt=40] register pm finish(ret=0, &pm=0x2b07a1bd0340, pm.get_tid()=19889, tenant_id=500)
[2024-09-13 13:02:15.612424] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19889][][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.612452] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [19889][DiskCB][T0][Y0-0000000000000000-0-0] [lt=7] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:15.612531] WDIAG create_tg_tenant (thread_mgr.h:1032) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=0][errcode=0] create tg tenant but tenant tg helper is null(tg_def_id=122, tg_id=274)
[2024-09-13 13:02:15.612548] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=17] create tg succeed(tg_id=274, tg=0x2b07969dbed0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07969dbed0)
[2024-09-13 13:02:15.612729] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19890][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=64424509440)
[2024-09-13 13:02:15.612790] INFO register_pm (ob_page_manager.cpp:40) [19890][][T0][Y0-0000000000000000-0-0] [lt=48] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:15.612804] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19890][][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.613256] WDIAG create_tg_tenant (thread_mgr.h:1032) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=1][errcode=0] create tg tenant but tenant tg helper is null(tg_def_id=122, tg_id=275)
[2024-09-13 13:02:15.613268] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] create tg succeed(tg_id=275, tg=0x2b07969d7ed0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07969d7ed0)
[2024-09-13 13:02:15.613273] INFO init (ob_io_struct.cpp:2551) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] start tg(tg_id_=275, tg_name=IO_CALLBACK)
[2024-09-13 13:02:15.613410] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19891][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=68719476736)
[2024-09-13 13:02:15.613487] INFO register_pm (ob_page_manager.cpp:40) [19891][][T0][Y0-0000000000000000-0-0] [lt=64] register pm finish(ret=0, &pm=0x2b07a24d4340, pm.get_tid()=19891, tenant_id=500)
[2024-09-13 13:02:15.613502] WDIAG [STORAGE.TRANS] getClock
(ob_clock_generator.h:70) [19891][][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.613519] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [19891][DiskCB][T0][Y0-0000000000000000-0-0] [lt=11] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:15.613948] WDIAG create_tg_tenant (thread_mgr.h:1032) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7][errcode=0] create tg tenant but tenant tg helper is null(tg_def_id=122, tg_id=276) [2024-09-13 13:02:15.613961] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] create tg succeed(tg_id=276, tg=0x2b07969e7cb0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07969e7cb0) [2024-09-13 13:02:15.613966] INFO init (ob_io_struct.cpp:2551) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] start tg(tg_id_=276, tg_name=IO_CALLBACK) [2024-09-13 13:02:15.614097] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19892][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=73014444032) [2024-09-13 13:02:15.614140] INFO register_pm (ob_page_manager.cpp:40) [19892][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07a2552340, pm.get_tid()=19892, tenant_id=500) [2024-09-13 13:02:15.614157] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19892][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.614168] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [19892][DiskCB][T0][Y0-0000000000000000-0-0] [lt=7] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:15.614602] WDIAG create_tg_tenant (thread_mgr.h:1032) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4][errcode=0] create tg tenant but tenant tg helper is null(tg_def_id=122, tg_id=277) [2024-09-13 13:02:15.614612] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] create tg succeed(tg_id=277, 
tg=0x2b07969cbd90, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07969cbd90) [2024-09-13 13:02:15.614617] INFO init (ob_io_struct.cpp:2551) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] start tg(tg_id_=277, tg_name=IO_CALLBACK) [2024-09-13 13:02:15.614754] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19893][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=77309411328) [2024-09-13 13:02:15.614823] INFO register_pm (ob_page_manager.cpp:40) [19893][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07a25d0340, pm.get_tid()=19893, tenant_id=500) [2024-09-13 13:02:15.614845] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19893][][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.614884] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [19893][DiskCB][T0][Y0-0000000000000000-0-0] [lt=12] io callback thread started [2024-09-13 13:02:15.615316] WDIAG create_tg_tenant (thread_mgr.h:1032) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3][errcode=0] create tg tenant but tenant tg helper is null(tg_def_id=122, tg_id=278) [2024-09-13 13:02:15.615328] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] create tg succeed(tg_id=278, tg=0x2b07969cded0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07969cded0) [2024-09-13 13:02:15.615334] INFO init (ob_io_struct.cpp:2551) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] start tg(tg_id_=278, tg_name=IO_CALLBACK) [2024-09-13 13:02:15.615683] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19894][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=81604378624) [2024-09-13 13:02:15.615729] INFO register_pm (ob_page_manager.cpp:40) [19894][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07a2e56340, pm.get_tid()=19894, tenant_id=500) [2024-09-13 
13:02:15.615749] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19894][][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.615766] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [19894][DiskCB][T0][Y0-0000000000000000-0-0] [lt=12] io callback thread started [2024-09-13 13:02:15.616246] WDIAG create_tg_tenant (thread_mgr.h:1032) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3][errcode=0] create tg tenant but tenant tg helper is null(tg_def_id=122, tg_id=279) [2024-09-13 13:02:15.616262] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] create tg succeed(tg_id=279, tg=0x2b07969d9cb0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07969d9cb0) [2024-09-13 13:02:15.616267] INFO init (ob_io_struct.cpp:2551) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] start tg(tg_id_=279, tg_name=IO_CALLBACK) [2024-09-13 13:02:15.616498] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19895][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=85899345920) [2024-09-13 13:02:15.616549] INFO register_pm (ob_page_manager.cpp:40) [19895][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07a2ed4340, pm.get_tid()=19895, tenant_id=500) [2024-09-13 13:02:15.616567] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19895][][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.616580] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [19895][DiskCB][T0][Y0-0000000000000000-0-0] [lt=8] io callback thread started [2024-09-13 13:02:15.616694] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=1][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.617471] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [19877][observer][T0][Y0-0000000000000001-0-0] 
[lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:15.617570] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.617663] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.617730] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.617740] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.617755] WDIAG [SHARE.SCHEMA] check_inner_stat (ob_server_schema_service.cpp:285) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3][errcode=0] inner stat error(schema_service_=NULL, sql_proxy_=NULL, config_=NULL) [2024-09-13 13:02:15.617787] WDIAG [SHARE.SCHEMA] check_inner_stat (ob_multi_version_schema_service.cpp:1794) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=31][errcode=0] inner stat error(init_=false) [2024-09-13 13:02:15.617798] WDIAG [SHARE.SCHEMA] check_if_tenant_has_been_dropped (ob_multi_version_schema_service.cpp:2067) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6][errcode=-4014] inner stat error(ret=-4014) [2024-09-13 13:02:15.617810] WDIAG [SERVER] nonblock_get_leader (ob_inner_sql_connection.cpp:1906) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8][errcode=-4014] user tenant has been dropped(ret=-4014, ret="OB_INNER_STAT_ERROR", tenant_id=1) [2024-09-13 13:02:15.617821] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1817) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11][errcode=-4014] nonblock get leader failed(ret=-4014, tenant_id=1, ls_id={id:1}, cluster_id=1726203323) 
[2024-09-13 13:02:15.617837] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12][errcode=-4014] retry_while_no_tenant_resource failed(ret=-4014, tenant_id=1) [2024-09-13 13:02:15.617852] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13][errcode=-4014] execute_read failed(ret=-4014, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:15.617861] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6][errcode=-4014] query failed(ret=-4014, conn=0x2b07a13e0060, start=1726203735617563, sql=select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA") [2024-09-13 13:02:15.617888] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=25][errcode=-4014] read failed(ret=-4014) [2024-09-13 13:02:15.617897] WDIAG [COMMON] parse_calibration_table (ob_io_calibration.cpp:829) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4][errcode=-4014] query failed(ret=-4014, sql_string=select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA") [2024-09-13 13:02:15.617988] WDIAG [COMMON] read_from_table (ob_io_calibration.cpp:699) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13][errcode=-4014] parse calibration data failed(ret=-4014) [2024-09-13 13:02:15.618102] WDIAG crc64_sse42_dispatch (ob_crc64.cpp:1149) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10][errcode=0] Use ISAL for crc64 calculate [2024-09-13 13:02:15.618123] INFO [CLOG] update_checksum (ob_server_log_block_mgr.cpp:1529) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] update_checksum success(this={magic:19536, version:1, flag:0, log_pool_meta:{curr_total_size:0, next_total_size:0, status:0}, 
checksum:4141593973}) [2024-09-13 13:02:15.631110] INFO [CLOG] update_log_pool_meta_guarded_by_lock_ (ob_server_log_block_mgr.cpp:877) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=21] update_log_pool_meta_guarded_by_lock_ success(ret=0, this={dir::"", dir_fd:-1, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:0, status:0}, min_block_id:0, max_block_id:0, min_log_disk_size_for_all_tenants_:0, is_inited:false}) [2024-09-13 13:02:15.644683] INFO [CLOG] scan_log_pool_dir_and_do_trim_ (ob_server_log_block_mgr.cpp:643) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=28] the log pool is empty, no need trime(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:0, status:0}, min_block_id:0, max_block_id:0, min_log_disk_size_for_all_tenants_:0, is_inited:true}) [2024-09-13 13:02:15.646690] INFO [CLOG] deserialize (ob_server_log_block_mgr.cpp:1484) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=26] deserialize LogPoolMeta success(this={curr_total_size:0, next_total_size:0, status:0}, buf="LP") [2024-09-13 13:02:15.646715] INFO [CLOG] deserialize (ob_server_log_block_mgr.cpp:1573) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=24] deserialize LogPoolMeta success(this={magic:19536, version:1, flag:0, log_pool_meta:{curr_total_size:0, next_total_size:0, status:0}, checksum:4141593973}, buf="LP") [2024-09-13 13:02:15.646730] INFO [CLOG] load_meta_ (ob_server_log_block_mgr.cpp:765) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] load_meta_ success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:0, status:0}, min_block_id:0, max_block_id:0, min_log_disk_size_for_all_tenants_:0, is_inited:true}) [2024-09-13 13:02:15.646744] INFO [CLOG] try_continous_to_resize_ (ob_server_log_block_mgr.cpp:739) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] current status is normal, no need continous 
do resize(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:0, status:0}, min_block_id:0, max_block_id:0, min_log_disk_size_for_all_tenants_:0, is_inited:true}) [2024-09-13 13:02:15.646761] INFO [CLOG] do_load_ (ob_server_log_block_mgr.cpp:618) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] do_load_ success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:0, status:0}, min_block_id:0, max_block_id:0, min_log_disk_size_for_all_tenants_:0, is_inited:true}, time_guard=time guard 'RestartServerBlockMgr' cost too much time, used=9572, time_dist: scan_log_disk_=7457, scan_log_pool_dir_and_do_trim_=52, load_meta_=2027, try_continous_to_resize_=22) [2024-09-13 13:02:15.646787] INFO [CLOG] init (ob_server_log_block_mgr.cpp:119) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=23] ObServerLogBlockMgr init success(this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:0, status:0}, min_block_id:0, max_block_id:0, min_log_disk_size_for_all_tenants_:0, is_inited:true}) [2024-09-13 13:02:15.646830] INFO [SERVER] cal_all_part_disk_default_percentage (ob_server_utils.cpp:301) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] cal_all_part_disk_default_percentage succ(data_dir="/data1/oceanbase/data/sstable", clog_dir="/data1/oceanbase/data/clog", shared_mode=true, data_disk_total_size=300808052736, data_disk_default_percentage=60, clog_disk_total_size=300808052736, clog_disk_default_percentage=30) [2024-09-13 13:02:15.646848] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:337) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] decide disk size finished(suggested_disk_size=21474836480, suggested_disk_percentage=0, default_disk_percentage=60, total_space=300808052736, disk_size=21474836480) [2024-09-13 13:02:15.646855] INFO [SERVER] 
get_data_disk_info_in_config (ob_server_utils.cpp:128) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] get_data_disk_info_in_config(suggested_data_disk_size=21474836480, suggested_clog_disk_size=21474836480, suggested_data_disk_percentage=0, suggested_clog_disk_percentage=0, data_disk_size=21474836480, data_disk_percentage=0) [2024-09-13 13:02:15.646870] INFO [SERVER] cal_all_part_disk_default_percentage (ob_server_utils.cpp:301) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] cal_all_part_disk_default_percentage succ(data_dir="/data1/oceanbase/data/sstable", clog_dir="/data1/oceanbase/data/clog", shared_mode=true, data_disk_total_size=300808052736, data_disk_default_percentage=60, clog_disk_total_size=300808052736, clog_disk_default_percentage=30) [2024-09-13 13:02:15.646894] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:337) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=25] decide disk size finished(suggested_disk_size=21474836480, suggested_disk_percentage=0, default_disk_percentage=30, total_space=300808052736, disk_size=21474836480) [2024-09-13 13:02:15.646905] INFO [SERVER] get_log_disk_info_in_config (ob_server_utils.cpp:88) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] get_log_disk_info_in_config(suggested_data_disk_size=21474836480, suggested_clog_disk_size=21474836480, suggested_data_disk_percentage=0, suggested_clog_disk_percentage=0, log_disk_size=21474836480, log_disk_percentage=0, total_log_disk_size=300808052736) [2024-09-13 13:02:15.646918] INFO [SERVER] cal_all_part_disk_size (ob_server_utils.cpp:151) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] cal_all_part_disk_size success(suggested_data_disk_size=21474836480, suggested_clog_disk_size=21474836480, suggested_data_disk_percentage=0, suggested_clog_disk_percentage=0, data_disk_size=21474836480, log_disk_size=21474836480, data_disk_percentage=0, log_disk_percentage=0) [2024-09-13 13:02:15.646962] INFO get_device (ob_device_manager.cpp:244) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] try to init device manager! [2024-09-13 13:02:15.663149] INFO alloc_device (ob_device_manager.cpp:210) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] success insert into map!(storage_info.ptr()=0x55a371c7fe46, storage_type_prefix=local://) [2024-09-13 13:02:15.663181] INFO alloc_device (ob_device_manager.cpp:226) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=31] alloc a new device!(storage_info.ptr()=0x55a371c7fe46, storage_type_prefix=local://, avai_idx=0, device_count_=1, device_handle=0x2b07a0c32080) [2024-09-13 13:02:15.663683] INFO [SHARE] init (ob_io_device_helper.cpp:190) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] finish to init io device(ret=0, data_dir="/data1/oceanbase/data", sstable_dir="/data1/oceanbase/data/sstable", block_size=2097152, data_disk_percentage=0, data_disk_size=21474836480) [2024-09-13 13:02:15.663715] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=20] create tg succeed(tg_id=280, tg=0x2b07969e3d90, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2}) [2024-09-13 13:02:15.663790] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] start tg(tg_id_=280, tg_name=IO_CHANNEL) [2024-09-13 13:02:15.664044] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19896][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=90194313216) [2024-09-13 13:02:15.664117] INFO register_pm (ob_page_manager.cpp:40) [19896][][T0][Y0-0000000000000000-0-0] [lt=29] register pm finish(ret=0, &pm=0x2b07a2f52340, pm.get_tid()=19896, tenant_id=500) [2024-09-13 13:02:15.664139] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19896][][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.664163] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] create tg succeed(tg_id=281, 
tg=0x2b07969e9d90, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2}) [2024-09-13 13:02:15.664180] INFO [COMMON] run1 (ob_io_struct.cpp:1972) [19896][IO_GETEVENT0][T0][Y0-0000000000000000-0-0] [lt=11] io get_events thread started(thread_id=0, tg_id_=280) [2024-09-13 13:02:15.664200] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] start tg(tg_id_=281, tg_name=IO_CHANNEL) [2024-09-13 13:02:15.664422] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19897][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=94489280512) [2024-09-13 13:02:15.664504] INFO register_pm (ob_page_manager.cpp:40) [19897][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07a2fd0340, pm.get_tid()=19897, tenant_id=500) [2024-09-13 13:02:15.664522] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19897][][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.664535] INFO [COMMON] run1 (ob_io_struct.cpp:1972) [19897][IO_GETEVENT0][T0][Y0-0000000000000000-0-0] [lt=8] io get_events thread started(thread_id=0, tg_id_=281) [2024-09-13 13:02:15.664536] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] create tg succeed(tg_id=282, tg=0x2b07969f5ed0, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2}) [2024-09-13 13:02:15.664575] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=282, tg_name=IO_CHANNEL) [2024-09-13 13:02:15.664771] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19898][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=98784247808) [2024-09-13 13:02:15.664850] INFO register_pm (ob_page_manager.cpp:40) [19898][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07a3456340, pm.get_tid()=19898, tenant_id=500) [2024-09-13 13:02:15.664867] WDIAG 
[STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19898][][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.664895] INFO [COMMON] run1 (ob_io_struct.cpp:1972) [19898][IO_GETEVENT0][T0][Y0-0000000000000000-0-0] [lt=24] io get_events thread started(thread_id=0, tg_id_=282) [2024-09-13 13:02:15.664898] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=283, tg=0x2b07a0877cb0, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2}) [2024-09-13 13:02:15.664955] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=283, tg_name=IO_CHANNEL) [2024-09-13 13:02:15.665162] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19899][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=103079215104) [2024-09-13 13:02:15.665252] INFO register_pm (ob_page_manager.cpp:40) [19899][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07a34d4340, pm.get_tid()=19899, tenant_id=500) [2024-09-13 13:02:15.665271] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19899][][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.665283] INFO [COMMON] run1 (ob_io_struct.cpp:1972) [19899][IO_GETEVENT0][T0][Y0-0000000000000000-0-0] [lt=8] io get_events thread started(thread_id=0, tg_id_=283) [2024-09-13 13:02:15.665285] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] create tg succeed(tg_id=284, tg=0x2b07a087bd90, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2}) [2024-09-13 13:02:15.665325] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id_=284, tg_name=IO_CHANNEL) [2024-09-13 13:02:15.665533] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19900][][T0][Y0-0000000000000000-0-0] [lt=0] succ to 
generate background session id(sessid=107374182400) [2024-09-13 13:02:15.665607] INFO register_pm (ob_page_manager.cpp:40) [19900][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07a3552340, pm.get_tid()=19900, tenant_id=500) [2024-09-13 13:02:15.665906] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19900][][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.665924] INFO [COMMON] run1 (ob_io_struct.cpp:1972) [19900][IO_GETEVENT0][T0][Y0-0000000000000000-0-0] [lt=13] io get_events thread started(thread_id=0, tg_id_=284) [2024-09-13 13:02:15.665925] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=285, tg=0x2b07a087ded0, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2}) [2024-09-13 13:02:15.665961] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=285, tg_name=IO_CHANNEL) [2024-09-13 13:02:15.666145] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19901][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=111669149696) [2024-09-13 13:02:15.666236] INFO register_pm (ob_page_manager.cpp:40) [19901][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07a35d0340, pm.get_tid()=19901, tenant_id=500) [2024-09-13 13:02:15.666259] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19901][][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.666272] INFO [COMMON] run1 (ob_io_struct.cpp:1972) [19901][IO_GETEVENT0][T0][Y0-0000000000000000-0-0] [lt=8] io get_events thread started(thread_id=0, tg_id_=285) [2024-09-13 13:02:15.666273] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=286, tg=0x2b07a088bcb0, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2}) [2024-09-13 13:02:15.666320] 
INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=286, tg_name=IO_CHANNEL) [2024-09-13 13:02:15.666517] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19902][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=115964116992) [2024-09-13 13:02:15.666591] INFO register_pm (ob_page_manager.cpp:40) [19902][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07a3656340, pm.get_tid()=19902, tenant_id=500) [2024-09-13 13:02:15.666608] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19902][][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.666628] INFO [COMMON] run1 (ob_io_struct.cpp:1972) [19902][IO_GETEVENT0][T0][Y0-0000000000000000-0-0] [lt=7] io get_events thread started(thread_id=0, tg_id_=286) [2024-09-13 13:02:15.666630] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] create tg succeed(tg_id=287, tg=0x2b07a088dd90, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2}) [2024-09-13 13:02:15.666677] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] start tg(tg_id_=287, tg_name=IO_CHANNEL) [2024-09-13 13:02:15.666908] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19903][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=120259084288) [2024-09-13 13:02:15.666984] INFO register_pm (ob_page_manager.cpp:40) [19903][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07a36d4340, pm.get_tid()=19903, tenant_id=500) [2024-09-13 13:02:15.667006] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19903][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.667023] INFO [COMMON] run1 (ob_io_struct.cpp:1972) [19903][IO_GETEVENT0][T0][Y0-0000000000000000-0-0] [lt=7] io get_events 
thread started(thread_id=0, tg_id_=287)
[2024-09-13 13:02:15.667024] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=288, tg=0x2b07a088fed0, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2})
[2024-09-13 13:02:15.667039] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] start tg(tg_id_=288, tg_name=IO_CHANNEL)
[2024-09-13 13:02:15.667239] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19904][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=124554051584)
[2024-09-13 13:02:15.667317] INFO register_pm (ob_page_manager.cpp:40) [19904][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07a3752340, pm.get_tid()=19904, tenant_id=500)
[2024-09-13 13:02:15.667336] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19904][][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.667349] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=289, tg=0x2b07a29953e0, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2})
[2024-09-13 13:02:15.667361] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=289, tg_name=IO_CHANNEL)
[2024-09-13 13:02:15.667354] INFO [COMMON] run1 (ob_io_struct.cpp:2257) [19904][IO_SYNC_CH0][T0][Y0-0000000000000000-0-0] [lt=7] sync io thread started(thread_id=0, tg_id_=288)
[2024-09-13 13:02:15.667562] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19905][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=128849018880)
[2024-09-13 13:02:15.667637] INFO register_pm (ob_page_manager.cpp:40) [19905][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07a37d0340, pm.get_tid()=19905, tenant_id=500)
[2024-09-13 13:02:15.667655] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19905][][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.667666] INFO [COMMON] run1 (ob_io_struct.cpp:2257) [19905][IO_SYNC_CH0][T0][Y0-0000000000000000-0-0] [lt=7] sync io thread started(thread_id=0, tg_id_=289)
[2024-09-13 13:02:15.667668] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=290, tg=0x2b07a29993e0, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2})
[2024-09-13 13:02:15.667675] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=290, tg_name=IO_CHANNEL)
[2024-09-13 13:02:15.667882] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19906][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=133143986176)
[2024-09-13 13:02:15.667964] INFO register_pm (ob_page_manager.cpp:40) [19906][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07a3856340, pm.get_tid()=19906, tenant_id=500)
[2024-09-13 13:02:15.667980] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19906][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.667994] INFO [COMMON] run1 (ob_io_struct.cpp:2257) [19906][IO_SYNC_CH0][T0][Y0-0000000000000000-0-0] [lt=7] sync io thread started(thread_id=0, tg_id_=290)
[2024-09-13 13:02:15.667995] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=291, tg=0x2b07a299d3e0, thread_cnt=1, tg->attr_={name:IO_CHANNEL, type:2})
[2024-09-13 13:02:15.668007] INFO start_thread (ob_io_struct.cpp:1864) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id_=291, tg_name=IO_CHANNEL)
[2024-09-13 13:02:15.668194] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19907][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=137438953472)
[2024-09-13 13:02:15.668268] INFO register_pm (ob_page_manager.cpp:40) [19907][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07a38d4340, pm.get_tid()=19907, tenant_id=500)
[2024-09-13 13:02:15.668284] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19907][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.668298] INFO [COMMON] run1 (ob_io_struct.cpp:2257) [19907][IO_SYNC_CH0][T0][Y0-0000000000000000-0-0] [lt=7] sync io thread started(thread_id=0, tg_id_=291)
[2024-09-13 13:02:15.668303] INFO [COMMON] add_device_channel (ob_io_manager.cpp:425) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] add io device channel succ(device_handle=0x2b07a0c32080)
[2024-09-13 13:02:15.668334] WDIAG [SERVER.OMT] init_cgroup_root_dir_ (ob_cgroup_ctrl.cpp:854) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9][errcode=-4027] no cgroup directory found. disable cgroup support(cgroup_path="cgroup", ret=-4027)
[2024-09-13 13:02:15.668347] WDIAG [SERVER.OMT] init (ob_cgroup_ctrl.cpp:99) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13][errcode=-4027] init cgroup dir failed(ret=-4027, root_cgroup_="cgroup")
[2024-09-13 13:02:15.668361] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.668371] WDIAG [COMMON] get_instance (memory_dump.cpp:106) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10][errcode=-4006] memory dump not init
[2024-09-13 13:02:15.668380] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.668412] INFO [COMMON] init (memory_dump.cpp:130) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] pre memory size(sizeof(PreAllocMemory)=12760320)
[2024-09-13 13:02:15.674713] INFO init (memory_dump.cpp:151) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] start tg(TGDefIDs::MEMORY_DUMP=8, tg_name=memDump)
[2024-09-13 13:02:15.674960] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19908][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=141733920768)
[2024-09-13 13:02:15.675071] INFO register_pm (ob_page_manager.cpp:40) [19908][][T0][Y0-0000000000000000-0-0] [lt=24] register pm finish(ret=0, &pm=0x2b07a3952340, pm.get_tid()=19908, tenant_id=500)
[2024-09-13 13:02:15.675120] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19908][][T0][Y0-0000000000000000-0-0] [lt=46][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.675145] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.675179] INFO [COMMON] get_suitable_bucket_num (ob_kv_storecache.cpp:187) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=26] The ObKVGlobalCache set suitable kvcache buckets(bucket_num=6291469, server_memory_factor=8, reserved_memory=2576980377)
[2024-09-13 13:02:15.692504] INFO [COMMON] init (ob_kvcache_store.cpp:97) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=18] ObKVCacheStore init success(max_cache_size=33566535680, block_size=2080768)
[2024-09-13 13:02:15.711323] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.775234] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.811418] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=24][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.812290] INFO init (ob_kv_storecache.cpp:223) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=28] start tg(lib::TGDefIDs::KVCacheWash=42, tg_name=KVCacheWash)
[2024-09-13 13:02:15.812602] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19910][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=146028888064)
[2024-09-13 13:02:15.812714] INFO register_pm (ob_page_manager.cpp:40) [19910][][T0][Y0-0000000000000000-0-0] [lt=31] register pm finish(ret=0, &pm=0x2b07a39d0340, pm.get_tid()=19910, tenant_id=500)
[2024-09-13 13:02:15.812737] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19910][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.812805] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=28] ObTimer create success(this=0x2b07968ad7f0, thread_id=19910, lbt()=0x24edc06b 0x13836960 0x115a4182 0x1248bfb7 0xb8dee8d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:15.812821] INFO init (ob_kv_storecache.cpp:225) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] start tg(lib::TGDefIDs::KVCacheRep=43, tg_name=KVCacheRep)
[2024-09-13 13:02:15.813130] INFO run1 (ob_timer.cpp:361) [19910][][T0][Y0-0000000000000000-0-0] [lt=10] timer thread started(this=0x2b07968ad7f0, tid=19910, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:15.813127] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19911][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=150323855360)
[2024-09-13 13:02:15.813163] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.813229] INFO register_pm (ob_page_manager.cpp:40) [19911][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07aec56340, pm.get_tid()=19911, tenant_id=500)
[2024-09-13 13:02:15.813256] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19911][][T0][Y0-0000000000000000-0-0] [lt=23][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.813295] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] ObTimer create success(this=0x2b07968ad960, thread_id=19911, lbt()=0x24edc06b 0x13836960 0x115a4182 0x1248c037 0xb8dee8d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:15.813340] INFO [COMMON] reload_wash_interval (ob_kv_storecache.cpp:818) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] success to reload_wash_interval(wash_interval=200000)
[2024-09-13 13:02:15.813362] INFO [COMMON] init (ob_kv_storecache.cpp:252) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] ObKVGlobalCache has been inited!(bucket_num=6291469, max_cache_size=33566535680, block_size=2080768)
[2024-09-13 13:02:15.813790] INFO run1 (ob_timer.cpp:361) [19911][][T0][Y0-0000000000000000-0-0] [lt=9] timer thread started(this=0x2b07968ad960, tid=19911, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:15.813812] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.814405] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=15226476118660514297, table={database_id:201001, name_case_mode:2, table_name:"__all_core_table"}, strlen=16)
[2024-09-13 13:02:15.814708] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=37] set tenant space table name(key=12077705988849111579, table={database_id:201001, name_case_mode:2, table_name:"__all_table"}, strlen=11)
[2024-09-13 13:02:15.814800] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant space table name(key=11727259994073319751, table={database_id:201001, name_case_mode:2, table_name:"__all_column"}, strlen=12)
[2024-09-13 13:02:15.814834] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=487700392585173101, table={database_id:201001, name_case_mode:2, table_name:"__all_ddl_operation"}, strlen=19)
[2024-09-13 13:02:15.814953] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=3662393253953293139, table={database_id:201001, name_case_mode:2, table_name:"__all_user"}, strlen=10)
[2024-09-13 13:02:15.815049] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=10347691945195890309, table={database_id:201001, name_case_mode:2, table_name:"__all_user_history"}, strlen=18)
[2024-09-13 13:02:15.815075] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=6478477434419865197, table={database_id:201001, name_case_mode:2, table_name:"__all_database"}, strlen=14)
[2024-09-13 13:02:15.815104] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12324969384525918141, table={database_id:201001, name_case_mode:2, table_name:"__all_database_history"}, strlen=22)
[2024-09-13 13:02:15.815150] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=6544939996090161411, table={database_id:201001, name_case_mode:2, table_name:"__all_tablegroup"}, strlen=16)
[2024-09-13 13:02:15.815190] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=9230174574550506209, table={database_id:201001, name_case_mode:2, table_name:"__all_tablegroup_history"}, strlen=24)
[2024-09-13 13:02:15.815229] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15076769442941540929, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant"}, strlen=12)
[2024-09-13 13:02:15.815265] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2103283252546922553, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_history"}, strlen=20)
[2024-09-13 13:02:15.815307] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13279045070842814367, table={database_id:201001, name_case_mode:2, table_name:"__all_table_privilege"}, strlen=21)
[2024-09-13 13:02:15.815351] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17681914463146072195, table={database_id:201001, name_case_mode:2, table_name:"__all_table_privilege_history"}, strlen=29)
[2024-09-13 13:02:15.815385] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18198147208370910105, table={database_id:201001, name_case_mode:2, table_name:"__all_database_privilege"}, strlen=24)
[2024-09-13 13:02:15.815423] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13335089156427463537, table={database_id:201001, name_case_mode:2, table_name:"__all_database_privilege_history"}, strlen=32)
[2024-09-13 13:02:15.815626] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8386627974535773695, table={database_id:201001, name_case_mode:2, table_name:"__all_table_history"}, strlen=19)
[2024-09-13 13:02:15.815702] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=17940619070668235193, table={database_id:201001, name_case_mode:2, table_name:"__all_column_history"}, strlen=20)
[2024-09-13 13:02:15.815729] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] set tenant space table name(key=804168397464531157, table={database_id:201001, name_case_mode:2, table_name:"__all_zone"}, strlen=10)
[2024-09-13 13:02:15.815766] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8232794103823547407, table={database_id:201001, name_case_mode:2, table_name:"__all_server"}, strlen=12)
[2024-09-13 13:02:15.815805] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6703665805633317969, table={database_id:201001, name_case_mode:2, table_name:"__all_sys_parameter"}, strlen=19)
[2024-09-13 13:02:15.815841] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=9851932463369305511, table={database_id:201001, name_case_mode:2, table_name:"__tenant_parameter"}, strlen=18)
[2024-09-13 13:02:15.815866] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16642653260683727133, table={database_id:201001, name_case_mode:2, table_name:"__all_sys_variable"}, strlen=18)
[2024-09-13 13:02:15.815892] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant space table name(key=13070089431139260123, table={database_id:201001, name_case_mode:2, table_name:"__all_sys_stat"}, strlen=14)
[2024-09-13 13:02:15.815923] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9069418420064527651, table={database_id:201001, name_case_mode:2, table_name:"__all_unit"}, strlen=10)
[2024-09-13 13:02:15.815951] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=15515320811484657339, table={database_id:201001, name_case_mode:2, table_name:"__all_unit_config"}, strlen=17)
[2024-09-13 13:02:15.815976] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5474482723362942009, table={database_id:201001, name_case_mode:2, table_name:"__all_resource_pool"}, strlen=19)
[2024-09-13 13:02:15.815994] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5876952487480195497, table={database_id:201001, name_case_mode:2, table_name:"__all_charset"}, strlen=13)
[2024-09-13 13:02:15.816012] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1795113162659580005, table={database_id:201001, name_case_mode:2, table_name:"__all_collation"}, strlen=15)
[2024-09-13 13:02:15.816038] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=1716155363434330697, table={database_id:201003, name_case_mode:2, table_name:"help_topic"}, strlen=10)
[2024-09-13 13:02:15.816054] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8973188859429806243, table={database_id:201003, name_case_mode:2, table_name:"help_category"}, strlen=13)
[2024-09-13 13:02:15.816066] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15499975339262925301, table={database_id:201003, name_case_mode:2, table_name:"help_keyword"}, strlen=12)
[2024-09-13 13:02:15.816089] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13088092480086163129, table={database_id:201003, name_case_mode:2, table_name:"help_relation"}, strlen=13)
[2024-09-13 13:02:15.816101] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=18193494207799034211, table={database_id:201001, name_case_mode:2, table_name:"__all_dummy"}, strlen=11)
[2024-09-13 13:02:15.816140] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=663992401372972439, table={database_id:201001, name_case_mode:2, table_name:"__all_rootservice_event_history"}, strlen=31)
[2024-09-13 13:02:15.816155] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=10039099160382870579, table={database_id:201001, name_case_mode:2, table_name:"__all_privilege"}, strlen=15)
[2024-09-13 13:02:15.816205] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3521308379172393619, table={database_id:201001, name_case_mode:2, table_name:"__all_outline"}, strlen=13)
[2024-09-13 13:02:15.816252] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=500774069721720243, table={database_id:201001, name_case_mode:2, table_name:"__all_outline_history"}, strlen=21)
[2024-09-13 13:02:15.816274] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1042921113981961667, table={database_id:201001, name_case_mode:2, table_name:"__all_recyclebin"}, strlen=16)
[2024-09-13 13:02:15.816341] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=16848244930256647885, table={database_id:201001, name_case_mode:2, table_name:"__all_part"}, strlen=10)
[2024-09-13 13:02:15.816402] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1367570675294393285, table={database_id:201001, name_case_mode:2, table_name:"__all_part_history"}, strlen=18)
[2024-09-13 13:02:15.816456] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=13286699829075481941, table={database_id:201001, name_case_mode:2, table_name:"__all_sub_part"}, strlen=14)
[2024-09-13 13:02:15.816502] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4602243002968661693, table={database_id:201001, name_case_mode:2, table_name:"__all_sub_part_history"}, strlen=22)
[2024-09-13 13:02:15.816555] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17799652704697731175, table={database_id:201001, name_case_mode:2, table_name:"__all_part_info"}, strlen=15)
[2024-09-13 13:02:15.816606] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3031936591718982951, table={database_id:201001, name_case_mode:2, table_name:"__all_part_info_history"}, strlen=23)
[2024-09-13 13:02:15.816646] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=6214301824114323613, table={database_id:201001, name_case_mode:2, table_name:"__all_def_sub_part"}, strlen=18)
[2024-09-13 13:02:15.816686] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=15431532259233437141, table={database_id:201001, name_case_mode:2, table_name:"__all_def_sub_part_history"}, strlen=26)
[2024-09-13 13:02:15.816723] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14299411459193732821, table={database_id:201001, name_case_mode:2, table_name:"__all_server_event_history"}, strlen=26)
[2024-09-13 13:02:15.816769] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4474134102491154269, table={database_id:201001, name_case_mode:2, table_name:"__all_rootservice_job"}, strlen=21)
[2024-09-13 13:02:15.816797] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=9294579269041697877, table={database_id:201001, name_case_mode:2, table_name:"__all_sys_variable_history"}, strlen=26)
[2024-09-13 13:02:15.816821] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=8407162095714011381, table={database_id:201001, name_case_mode:2, table_name:"__all_restore_job"}, strlen=17)
[2024-09-13 13:02:15.816914] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15405363549951447867, table={database_id:201001, name_case_mode:2, table_name:"__all_restore_job_history"}, strlen=25)
[2024-09-13 13:02:15.816932] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=15126336451296145195, table={database_id:201001, name_case_mode:2, table_name:"__all_ddl_id"}, strlen=12)
[2024-09-13 13:02:15.816969] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6556147850475236615, table={database_id:201001, name_case_mode:2, table_name:"__all_foreign_key"}, strlen=17)
[2024-09-13 13:02:15.817005] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6003916054558667195, table={database_id:201001, name_case_mode:2, table_name:"__all_foreign_key_history"}, strlen=25)
[2024-09-13 13:02:15.817026] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14983513593375415935, table={database_id:201001, name_case_mode:2, table_name:"__all_foreign_key_column"}, strlen=24)
[2024-09-13 13:02:15.817048] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=17259218636947690865, table={database_id:201001, name_case_mode:2, table_name:"__all_foreign_key_column_history"}, strlen=32)
[2024-09-13 13:02:15.817074] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3369309189063807079, table={database_id:201001, name_case_mode:2, table_name:"__all_synonym"}, strlen=13)
[2024-09-13 13:02:15.817098] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6164942491782496947, table={database_id:201001, name_case_mode:2, table_name:"__all_synonym_history"}, strlen=21)
[2024-09-13 13:02:15.817119] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15707315468013655865, table={database_id:201001, name_case_mode:2, table_name:"__all_auto_increment"}, strlen=20)
[2024-09-13 13:02:15.817150] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=15481048351588947741, table={database_id:201001, name_case_mode:2, table_name:"__all_ddl_checksum"}, strlen=18)
[2024-09-13 13:02:15.817193] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2966663788907865603, table={database_id:201001, name_case_mode:2, table_name:"__all_routine"}, strlen=13)
[2024-09-13 13:02:15.817232] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12467473665640905523, table={database_id:201001, name_case_mode:2, table_name:"__all_routine_history"}, strlen=21)
[2024-09-13 13:02:15.817289] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4525265300527736011, table={database_id:201001, name_case_mode:2, table_name:"__all_routine_param"}, strlen=19)
[2024-09-13 13:02:15.817336] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=1218314858947921743, table={database_id:201001, name_case_mode:2, table_name:"__all_routine_param_history"}, strlen=27)
[2024-09-13 13:02:15.817367] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4625667915141197935, table={database_id:201001, name_case_mode:2, table_name:"__all_package"}, strlen=13)
[2024-09-13 13:02:15.817402] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9350636482973666035, table={database_id:201001, name_case_mode:2, table_name:"__all_package_history"}, strlen=21)
[2024-09-13 13:02:15.817424] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=998393027456732053, table={database_id:201001, name_case_mode:2, table_name:"__all_acquired_snapshot"}, strlen=23)
[2024-09-13 13:02:15.817458] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5556373293997605449, table={database_id:201001, name_case_mode:2, table_name:"__all_constraint"}, strlen=16)
[2024-09-13 13:02:15.817489] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11475804357484972513, table={database_id:201001, name_case_mode:2, table_name:"__all_constraint_history"}, strlen=24)
[2024-09-13 13:02:15.817510] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=14671751299176157395, table={database_id:201001, name_case_mode:2, table_name:"__all_ori_schema_version"}, strlen=24)
[2024-09-13 13:02:15.817530] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=14111330992233147957, table={database_id:201001, name_case_mode:2, table_name:"__all_func"}, strlen=10)
[2024-09-13 13:02:15.817554] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8118547034481856581, table={database_id:201001, name_case_mode:2, table_name:"__all_func_history"}, strlen=18)
[2024-09-13 13:02:15.817568] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1110452371895440697, table={database_id:201001, name_case_mode:2, table_name:"__all_temp_table"}, strlen=16)
[2024-09-13 13:02:15.817607] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=8706279734557067381, table={database_id:201001, name_case_mode:2, table_name:"__all_sequence_object"}, strlen=21)
[2024-09-13 13:02:15.817640] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=311492729514872259, table={database_id:201001, name_case_mode:2, table_name:"__all_sequence_object_history"}, strlen=29)
[2024-09-13 13:02:15.817656] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16250243702109460225, table={database_id:201001, name_case_mode:2, table_name:"__all_sequence_value"}, strlen=20)
[2024-09-13 13:02:15.817673] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3721853868477607205, table={database_id:201001, name_case_mode:2, table_name:"__all_freeze_schema_version"}, strlen=27)
[2024-09-13 13:02:15.817716] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11087628166864537917, table={database_id:201001, name_case_mode:2, table_name:"__all_type"}, strlen=10)
[2024-09-13 13:02:15.817758] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12680554093421879877, table={database_id:201001, name_case_mode:2, table_name:"__all_type_history"}, strlen=18)
[2024-09-13 13:02:15.817802] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9536818930262141557, table={database_id:201001, name_case_mode:2, table_name:"__all_type_attr"}, strlen=15)
[2024-09-13 13:02:15.817847] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=9185925756457514023, table={database_id:201001, name_case_mode:2, table_name:"__all_type_attr_history"}, strlen=23)
[2024-09-13 13:02:15.817891] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=9408247421213979999, table={database_id:201001, name_case_mode:2, table_name:"__all_coll_type"}, strlen=15)
[2024-09-13 13:02:15.817930] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7521902980132874023, table={database_id:201001, name_case_mode:2, table_name:"__all_coll_type_history"}, strlen=23)
[2024-09-13 13:02:15.817951] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13388842482819813155, table={database_id:201001, name_case_mode:2, table_name:"__all_weak_read_service"}, strlen=23)
[2024-09-13 13:02:15.818011] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=265138247334288457, table={database_id:201001, name_case_mode:2, table_name:"__all_dblink"}, strlen=12)
[2024-09-13 13:02:15.818068] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6709520633702190649, table={database_id:201001, name_case_mode:2, table_name:"__all_dblink_history"}, strlen=20)
[2024-09-13 13:02:15.818086] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2028679341034938537, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_role_grantee_map"}, strlen=29)
[2024-09-13 13:02:15.818108] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=1031911103916912531, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_role_grantee_map_history"}, strlen=37)
[2024-09-13 13:02:15.818130] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=4910421004887031027, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_keystore"}, strlen=21)
[2024-09-13 13:02:15.818157] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=5332820740432672259, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_keystore_history"}, strlen=29)
[2024-09-13 13:02:15.818183] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=13117474934449926091, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_ols_policy"}, strlen=23)
[2024-09-13 13:02:15.818208] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=18082021277245346327, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_ols_policy_history"}, strlen=31)
[2024-09-13 13:02:15.818232] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1724644558237091093, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_ols_component"}, strlen=26)
[2024-09-13 13:02:15.818269] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14980706431689169701, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_ols_component_history"}, strlen=34)
[2024-09-13 13:02:15.818290] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=15601369921806167451, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_ols_label"}, strlen=22)
[2024-09-13 13:02:15.818314] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=266144608167419341, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_ols_label_history"}, strlen=30)
[2024-09-13 13:02:15.818338] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1896378858718403101, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_ols_user_level"}, strlen=27)
[2024-09-13 13:02:15.818365] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7358788553473219103, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_ols_user_level_history"}, strlen=35)
[2024-09-13 13:02:15.818386] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5673522745085988803, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_tablespace"}, strlen=23)
[2024-09-13 13:02:15.818410] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15882626076296079127, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_tablespace_history"}, strlen=31)
[2024-09-13 13:02:15.818441] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=13665960389019087645,
table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_user_failed_login_stat"}, strlen=35) [2024-09-13 13:02:15.818475] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6409691859953470681, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_profile"}, strlen=20) [2024-09-13 13:02:15.818508] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1398456045760357577, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_profile_history"}, strlen=28) [2024-09-13 13:02:15.818530] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8629536137880323733, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_security_audit"}, strlen=27) [2024-09-13 13:02:15.818556] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10566526679905020831, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_security_audit_history"}, strlen=35) [2024-09-13 13:02:15.818627] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3533070307346278799, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_trigger"}, strlen=20) [2024-09-13 13:02:15.818692] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3200646983440820425, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_trigger_history"}, strlen=28) [2024-09-13 13:02:15.818725] INFO [SHARE.SCHEMA] 
init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16301959326055699619, table={database_id:201001, name_case_mode:2, table_name:"__all_seed_parameter"}, strlen=20) [2024-09-13 13:02:15.818834] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14514248590458868153, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_security_audit_record"}, strlen=34) [2024-09-13 13:02:15.818852] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=16525368402904849221, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_sysauth"}, strlen=20) [2024-09-13 13:02:15.818884] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=2505673765478814921, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_sysauth_history"}, strlen=28) [2024-09-13 13:02:15.818908] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11575954516769260421, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_objauth"}, strlen=20) [2024-09-13 13:02:15.818934] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4245225539433991177, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_objauth_history"}, strlen=28) [2024-09-13 13:02:15.818953] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=9257112491128637401, 
table={database_id:201001, name_case_mode:2, table_name:"__all_restore_info"}, strlen=18) [2024-09-13 13:02:15.818993] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=720157880997668007, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_error"}, strlen=18) [2024-09-13 13:02:15.819026] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=17852559657101847729, table={database_id:201001, name_case_mode:2, table_name:"__all_restore_progress"}, strlen=22) [2024-09-13 13:02:15.819078] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=7678268654218451217, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_object_type"}, strlen=24) [2024-09-13 13:02:15.819124] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13572145316021120241, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_object_type_history"}, strlen=32) [2024-09-13 13:02:15.819141] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15486192117395989245, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_time_zone"}, strlen=22) [2024-09-13 13:02:15.819157] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17905961867763276171, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_time_zone_name"}, strlen=27) [2024-09-13 13:02:15.819175] INFO [SHARE.SCHEMA] init_sys_table_name_map 
(ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8880680013683945841, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_time_zone_transition"}, strlen=33) [2024-09-13 13:02:15.819200] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=8725935798345421989, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_time_zone_transition_type"}, strlen=38) [2024-09-13 13:02:15.819221] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14941346610507177611, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_constraint_column"}, strlen=30) [2024-09-13 13:02:15.819252] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9732128688555474845, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_constraint_column_history"}, strlen=38) [2024-09-13 13:02:15.819291] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=12313307187600527045, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_global_transaction"}, strlen=31) [2024-09-13 13:02:15.819329] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=6535267976628650171, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_dependency"}, strlen=23) [2024-09-13 13:02:15.819348] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table 
name(key=2683210741194592195, table={database_id:201001, name_case_mode:2, table_name:"__all_res_mgr_plan"}, strlen=18) [2024-09-13 13:02:15.819371] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=321165096716538023, table={database_id:201001, name_case_mode:2, table_name:"__all_res_mgr_directive"}, strlen=23) [2024-09-13 13:02:15.819392] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=640502581641158861, table={database_id:201001, name_case_mode:2, table_name:"__all_res_mgr_mapping_rule"}, strlen=26) [2024-09-13 13:02:15.819428] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14745175144824914563, table={database_id:201001, name_case_mode:2, table_name:"__all_ddl_error_message"}, strlen=23) [2024-09-13 13:02:15.819457] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=14712446561922070599, table={database_id:201001, name_case_mode:2, table_name:"__all_space_usage"}, strlen=17) [2024-09-13 13:02:15.819548] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=3299497049567549729, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_set_files"}, strlen=22) [2024-09-13 13:02:15.819565] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=18182191731230995467, table={database_id:201001, name_case_mode:2, table_name:"__all_res_mgr_consumer_group"}, strlen=28) [2024-09-13 13:02:15.819581] INFO [SHARE.SCHEMA] 
init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13923207293091690251, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_info"}, strlen=17) [2024-09-13 13:02:15.819615] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5788276983515769643, table={database_id:201001, name_case_mode:2, table_name:"__all_ddl_task_status"}, strlen=21) [2024-09-13 13:02:15.819635] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14345458131120366555, table={database_id:201001, name_case_mode:2, table_name:"__all_region_network_bandwidth_limit"}, strlen=36) [2024-09-13 13:02:15.819680] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17893919763894948745, table={database_id:201001, name_case_mode:2, table_name:"__all_deadlock_event_history"}, strlen=28) [2024-09-13 13:02:15.819721] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15938259562735129653, table={database_id:201001, name_case_mode:2, table_name:"__all_column_usage"}, strlen=18) [2024-09-13 13:02:15.819770] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2603469821525028165, table={database_id:201001, name_case_mode:2, table_name:"__all_job"}, strlen=9) [2024-09-13 13:02:15.819789] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9677407561020623763, table={database_id:201001, 
name_case_mode:2, table_name:"__all_job_log"}, strlen=13) [2024-09-13 13:02:15.819806] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18299181273089282109, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_directory"}, strlen=22) [2024-09-13 13:02:15.819832] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=3615909664776853197, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_directory_history"}, strlen=30) [2024-09-13 13:02:15.819891] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13203984404557063111, table={database_id:201001, name_case_mode:2, table_name:"__all_table_stat"}, strlen=16) [2024-09-13 13:02:15.819948] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13593456723596311753, table={database_id:201001, name_case_mode:2, table_name:"__all_column_stat"}, strlen=17) [2024-09-13 13:02:15.819977] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=2033987196978381727, table={database_id:201001, name_case_mode:2, table_name:"__all_histogram_stat"}, strlen=20) [2024-09-13 13:02:15.820006] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10523643211813031695, table={database_id:201001, name_case_mode:2, table_name:"__all_monitor_modified"}, strlen=22) [2024-09-13 13:02:15.820050] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17786317888840441505, table={database_id:201001, name_case_mode:2, table_name:"__all_table_stat_history"}, strlen=24) [2024-09-13 13:02:15.820102] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17844106146146067643, table={database_id:201001, name_case_mode:2, table_name:"__all_column_stat_history"}, strlen=25) [2024-09-13 13:02:15.820140] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=515416535842957513, table={database_id:201001, name_case_mode:2, table_name:"__all_histogram_stat_history"}, strlen=28) [2024-09-13 13:02:15.820164] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6256918342980536965, table={database_id:201001, name_case_mode:2, table_name:"__all_optstat_global_prefs"}, strlen=26) [2024-09-13 13:02:15.820185] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1005508210486894529, table={database_id:201001, name_case_mode:2, table_name:"__all_optstat_user_prefs"}, strlen=24) [2024-09-13 13:02:15.820235] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=6155439119741924235, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_meta_table"}, strlen=19) [2024-09-13 13:02:15.820255] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=6876489288212272485, table={database_id:201001, name_case_mode:2, 
table_name:"__all_tablet_to_ls"}, strlen=18) [2024-09-13 13:02:15.820282] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12164389443361469299, table={database_id:201001, name_case_mode:2, table_name:"__all_tablet_meta_table"}, strlen=23) [2024-09-13 13:02:15.820323] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=5092449924229211087, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_status"}, strlen=15) [2024-09-13 13:02:15.820375] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9921150920871154761, table={database_id:201001, name_case_mode:2, table_name:"__all_log_archive_progress"}, strlen=26) [2024-09-13 13:02:15.820417] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11667710272360144827, table={database_id:201001, name_case_mode:2, table_name:"__all_log_archive_history"}, strlen=25) [2024-09-13 13:02:15.820482] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14771177362653768571, table={database_id:201001, name_case_mode:2, table_name:"__all_log_archive_piece_files"}, strlen=29) [2024-09-13 13:02:15.820529] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=11153776165287517939, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_log_archive_progress"}, strlen=29) [2024-09-13 13:02:15.820548] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=4840667780182692961, table={database_id:201001, name_case_mode:2, table_name:"__all_ls"}, strlen=8) [2024-09-13 13:02:15.820574] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5871136232436001723, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_storage_info"}, strlen=25) [2024-09-13 13:02:15.820592] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5253726504995511709, table={database_id:201001, name_case_mode:2, table_name:"__all_dam_last_arch_ts"}, strlen=22) [2024-09-13 13:02:15.820622] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=6541041578174273069, table={database_id:201001, name_case_mode:2, table_name:"__all_dam_cleanup_jobs"}, strlen=22) [2024-09-13 13:02:15.820667] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=14454558605933543723, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_job"}, strlen=16) [2024-09-13 13:02:15.820709] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=345820062373353505, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_job_history"}, strlen=24) [2024-09-13 13:02:15.820765] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7038381903360144763, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_task"}, strlen=17) [2024-09-13 
13:02:15.820819] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17241731791169823227, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_task_history"}, strlen=25) [2024-09-13 13:02:15.820891] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5139340338441356893, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_ls_task"}, strlen=20) [2024-09-13 13:02:15.820953] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17112232417168141001, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_ls_task_history"}, strlen=28) [2024-09-13 13:02:15.820993] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=5326619424660546683, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_ls_task_info"}, strlen=25) [2024-09-13 13:02:15.821019] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10809465711872964189, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_skipped_tablet"}, strlen=27) [2024-09-13 13:02:15.821042] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7172672809704535647, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_skipped_tablet_history"}, strlen=35) [2024-09-13 13:02:15.821078] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space 
table name(key=8551986651507774603, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_info"}, strlen=17) [2024-09-13 13:02:15.821095] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16969434147253049475, table={database_id:201001, name_case_mode:2, table_name:"__all_tablet_to_table_history"}, strlen=29) [2024-09-13 13:02:15.821118] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6966337288252588875, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_recovery_stat"}, strlen=22) [2024-09-13 13:02:15.821155] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7672455471652392683, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_ls_task_info_history"}, strlen=33) [2024-09-13 13:02:15.821184] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1718272479064774183, table={database_id:201001, name_case_mode:2, table_name:"__all_tablet_replica_checksum"}, strlen=29) [2024-09-13 13:02:15.821206] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9277721814065941751, table={database_id:201001, name_case_mode:2, table_name:"__all_tablet_checksum"}, strlen=21) [2024-09-13 13:02:15.821255] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1410158923608136531, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_replica_task"}, strlen=21) [2024-09-13 13:02:15.821286] INFO 
[SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13552177719164551521, table={database_id:201001, name_case_mode:2, table_name:"__all_pending_transaction"}, strlen=25) [2024-09-13 13:02:15.821308] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7251939519579911917, table={database_id:201001, name_case_mode:2, table_name:"__all_balance_group_ls_stat"}, strlen=27) [2024-09-13 13:02:15.821401] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14672553900982078935, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_scheduler_job"}, strlen=26) [2024-09-13 13:02:15.821426] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=11082355209028953489, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_scheduler_job_run_detail"}, strlen=37) [2024-09-13 13:02:15.821474] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4332661699714066125, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_scheduler_program"}, strlen=30) [2024-09-13 13:02:15.821513] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=739950597555994075, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_scheduler_program_argument"}, strlen=39) [2024-09-13 13:02:15.821537] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space 
table name(key=6633247198031189931, table={database_id:201001, name_case_mode:2, table_name:"__all_context"}, strlen=13) [2024-09-13 13:02:15.821564] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=454551930966311923, table={database_id:201001, name_case_mode:2, table_name:"__all_context_history"}, strlen=21) [2024-09-13 13:02:15.821588] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13822394798323643845, table={database_id:201001, name_case_mode:2, table_name:"__all_global_context_value"}, strlen=26) [2024-09-13 13:02:15.821607] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2442104316564107661, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_election_reference_info"}, strlen=32) [2024-09-13 13:02:15.821653] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2023004161289107105, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_delete_job"}, strlen=23) [2024-09-13 13:02:15.821698] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9418129896431642007, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_delete_job_history"}, strlen=31) [2024-09-13 13:02:15.821744] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=16360320756335183541, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_delete_task"}, strlen=24) [2024-09-13 13:02:15.821793] INFO 
[SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] set tenant space table name(key=5641975476073571633, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_delete_task_history"}, strlen=32)
[2024-09-13 13:02:15.821834] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17591288669660113079, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_delete_ls_task"}, strlen=27)
[2024-09-13 13:02:15.821881] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3689768364663215327, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_delete_ls_task_history"}, strlen=35)
[2024-09-13 13:02:15.821910] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=12242147598090118883, table={database_id:201001, name_case_mode:2, table_name:"__all_zone_merge_info"}, strlen=21)
[2024-09-13 13:02:15.821938] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8265828946223808621, table={database_id:201001, name_case_mode:2, table_name:"__all_merge_info"}, strlen=16)
[2024-09-13 13:02:15.821953] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17916265507013964363, table={database_id:201001, name_case_mode:2, table_name:"__all_freeze_info"}, strlen=17)
[2024-09-13 13:02:15.821976] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=12386362653016066913, table={database_id:201001, name_case_mode:2, table_name:"__all_disk_io_calibration"}, strlen=25)
[2024-09-13 13:02:15.821994] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=106569157966755327, table={database_id:201001, name_case_mode:2, table_name:"__all_plan_baseline"}, strlen=19)
[2024-09-13 13:02:15.822039] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6856469588069932585, table={database_id:201001, name_case_mode:2, table_name:"__all_plan_baseline_item"}, strlen=24)
[2024-09-13 13:02:15.822055] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=737016062760421533, table={database_id:201001, name_case_mode:2, table_name:"__all_spm_config"}, strlen=16)
[2024-09-13 13:02:15.822077] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=385292018024404667, table={database_id:201001, name_case_mode:2, table_name:"__all_log_archive_dest_parameter"}, strlen=32)
[2024-09-13 13:02:15.822095] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=750388973936519071, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_parameter"}, strlen=22)
[2024-09-13 13:02:15.822129] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2064434701400261915, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_restore_progress"}, strlen=25)
[2024-09-13 13:02:15.822163] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5983074107095964001, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_restore_history"}, strlen=24)
[2024-09-13 13:02:15.822188] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15195981732091019563, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_storage_info_history"}, strlen=33)
[2024-09-13 13:02:15.822214] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=17225039833591821085, table={database_id:201001, name_case_mode:2, table_name:"__all_backup_delete_policy"}, strlen=26)
[2024-09-13 13:02:15.822232] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=10618024175747124645, table={database_id:201001, name_case_mode:2, table_name:"__all_mock_fk_parent_table"}, strlen=26)
[2024-09-13 13:02:15.822252] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4096373257849981925, table={database_id:201001, name_case_mode:2, table_name:"__all_mock_fk_parent_table_history"}, strlen=34)
[2024-09-13 13:02:15.822270] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15042892075467936941, table={database_id:201001, name_case_mode:2, table_name:"__all_mock_fk_parent_table_column"}, strlen=33)
[2024-09-13 13:02:15.822291] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6266905018957919515, table={database_id:201001, name_case_mode:2, table_name:"__all_mock_fk_parent_table_column_history"}, strlen=41)
[2024-09-13 13:02:15.822314] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=1626377680216083193, table={database_id:201001, name_case_mode:2, table_name:"__all_log_restore_source"}, strlen=24)
[2024-09-13 13:02:15.822349] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13586415458395150971, table={database_id:201001, name_case_mode:2, table_name:"__all_kv_ttl_task"}, strlen=17)
[2024-09-13 13:02:15.822388] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10315139639212232443, table={database_id:201001, name_case_mode:2, table_name:"__all_kv_ttl_task_history"}, strlen=25)
[2024-09-13 13:02:15.822404] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15060160636423438025, table={database_id:201001, name_case_mode:2, table_name:"__all_service_epoch"}, strlen=19)
[2024-09-13 13:02:15.822453] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=14207301601487712751, table={database_id:201001, name_case_mode:2, table_name:"__all_spatial_reference_systems"}, strlen=31)
[2024-09-13 13:02:15.822484] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1427218845995887213, table={database_id:201001, name_case_mode:2, table_name:"__all_column_checksum_error_info"}, strlen=32)
[2024-09-13 13:02:15.822527] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=536232206730337799, table={database_id:201001, name_case_mode:2, table_name:"__all_transfer_task"}, strlen=19)
[2024-09-13 13:02:15.822572] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=5753606698620705679, table={database_id:201001, name_case_mode:2, table_name:"__all_transfer_task_history"}, strlen=27)
[2024-09-13 13:02:15.822595] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9898996633246390741, table={database_id:201001, name_case_mode:2, table_name:"__all_balance_job"}, strlen=17)
[2024-09-13 13:02:15.822622] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=12994895240210476603, table={database_id:201001, name_case_mode:2, table_name:"__all_balance_job_history"}, strlen=25)
[2024-09-13 13:02:15.822659] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7598411896746562089, table={database_id:201001, name_case_mode:2, table_name:"__all_balance_task"}, strlen=18)
[2024-09-13 13:02:15.822703] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=18315806932585433045, table={database_id:201001, name_case_mode:2, table_name:"__all_balance_task_history"}, strlen=26)
[2024-09-13 13:02:15.822721] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2726033030194402551, table={database_id:201001, name_case_mode:2, table_name:"__all_arbitration_service"}, strlen=25)
[2024-09-13 13:02:15.822742] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] set tenant space table name(key=5367016885783802827, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_arb_replica_task"}, strlen=25)
[2024-09-13 13:02:15.822758] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17864334880061719717, table={database_id:201001, name_case_mode:2, table_name:"__all_data_dictionary_in_log"}, strlen=28)
[2024-09-13 13:02:15.822788] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10088403076917926571, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_arb_replica_task_history"}, strlen=33)
[2024-09-13 13:02:15.822817] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14641783777040638849, table={database_id:201001, name_case_mode:2, table_name:"__all_rls_policy"}, strlen=16)
[2024-09-13 13:02:15.822848] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15309866415396578593, table={database_id:201001, name_case_mode:2, table_name:"__all_rls_policy_history"}, strlen=24)
[2024-09-13 13:02:15.822865] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=736295740555706301, table={database_id:201001, name_case_mode:2, table_name:"__all_rls_security_column"}, strlen=25)
[2024-09-13 13:02:15.822889] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=9614448622504189163, table={database_id:201001, name_case_mode:2, table_name:"__all_rls_security_column_history"}, strlen=33)
[2024-09-13 13:02:15.822907] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10604796654752325437, table={database_id:201001, name_case_mode:2, table_name:"__all_rls_group"}, strlen=15)
[2024-09-13 13:02:15.822932] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=2287975951870720615, table={database_id:201001, name_case_mode:2, table_name:"__all_rls_group_history"}, strlen=23)
[2024-09-13 13:02:15.822951] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3384713904088717219, table={database_id:201001, name_case_mode:2, table_name:"__all_rls_context"}, strlen=17)
[2024-09-13 13:02:15.822972] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10516942058551462715, table={database_id:201001, name_case_mode:2, table_name:"__all_rls_context_history"}, strlen=25)
[2024-09-13 13:02:15.822990] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4796164509649626743, table={database_id:201001, name_case_mode:2, table_name:"__all_rls_attribute"}, strlen=19)
[2024-09-13 13:02:15.823011] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=17118269223872947663, table={database_id:201001, name_case_mode:2, table_name:"__all_rls_attribute_history"}, strlen=27)
[2024-09-13 13:02:15.823048] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6303018733342293705, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_rewrite_rules"}, strlen=26)
[2024-09-13 13:02:15.823068] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1505072643534354837, table={database_id:201001, name_case_mode:2, table_name:"__all_reserved_snapshot"}, strlen=23)
[2024-09-13 13:02:15.823098] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] set tenant space table name(key=3206000261011698383, table={database_id:201001, name_case_mode:2, table_name:"__all_cluster_event_history"}, strlen=27)
[2024-09-13 13:02:15.823116] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] set tenant space table name(key=2941245638296296215, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_transfer_member_list_lock_info"}, strlen=39)
[2024-09-13 13:02:15.823134] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11234993593640609995, table={database_id:201001, name_case_mode:2, table_name:"__all_external_table_file"}, strlen=25)
[2024-09-13 13:02:15.823170] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=7976223127122149093, table={database_id:201001, name_case_mode:2, table_name:"__all_task_opt_stat_gather_history"}, strlen=34)
[2024-09-13 13:02:15.823211] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14262268536927770655, table={database_id:201001, name_case_mode:2, table_name:"__all_table_opt_stat_gather_history"}, strlen=35)
[2024-09-13 13:02:15.823301] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17419416618465201167, table={database_id:201001, name_case_mode:2, table_name:"__wr_active_session_history"}, strlen=27)
[2024-09-13 13:02:15.823324] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=8648431613149611097, table={database_id:201001, name_case_mode:2, table_name:"__wr_snapshot"}, strlen=13)
[2024-09-13 13:02:15.823339] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=10924522129631307255, table={database_id:201001, name_case_mode:2, table_name:"__wr_statname"}, strlen=13)
[2024-09-13 13:02:15.823356] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1926239589368122255, table={database_id:201001, name_case_mode:2, table_name:"__wr_sysstat"}, strlen=12)
[2024-09-13 13:02:15.823375] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14044128492918867653, table={database_id:201001, name_case_mode:2, table_name:"__all_balance_task_helper"}, strlen=25)
[2024-09-13 13:02:15.823395] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10606982525096515081, table={database_id:201001, name_case_mode:2, table_name:"__all_dbms_lock_allocated"}, strlen=25)
[2024-09-13 13:02:15.823417] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13551619464768476839, table={database_id:201001, name_case_mode:2, table_name:"__wr_control"}, strlen=12)
[2024-09-13 13:02:15.823466] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12273043209952030165, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_event_history"}, strlen=26)
[2024-09-13 13:02:15.823494] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=2156128032411121717, table={database_id:201001, name_case_mode:2, table_name:"__all_tenant_scheduler_job_class"}, strlen=32)
[2024-09-13 13:02:15.823577] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=16402562115065233409, table={database_id:201001, name_case_mode:2, table_name:"__all_recover_table_job"}, strlen=23)
[2024-09-13 13:02:15.823654] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=322676203845648151, table={database_id:201001, name_case_mode:2, table_name:"__all_recover_table_job_history"}, strlen=31)
[2024-09-13 13:02:15.823731] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4057484502251261055, table={database_id:201001, name_case_mode:2, table_name:"__all_import_table_job"}, strlen=22)
[2024-09-13 13:02:15.823805] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10256335000019078605, table={database_id:201001, name_case_mode:2, table_name:"__all_import_table_job_history"}, strlen=30)
[2024-09-13 13:02:15.823891] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15253123702236863135, table={database_id:201001, name_case_mode:2, table_name:"__all_import_table_task"}, strlen=23)
[2024-09-13 13:02:15.823966] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=8857254671380765655, table={database_id:201001, name_case_mode:2, table_name:"__all_import_table_task_history"}, strlen=31)
[2024-09-13 13:02:15.823997] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16250367188041164807, table={database_id:201001, name_case_mode:2, table_name:"__all_storage_ha_error_diagnose_history"}, strlen=39)
[2024-09-13 13:02:15.824030] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11398066368870582173, table={database_id:201001, name_case_mode:2, table_name:"__all_storage_ha_perf_diagnose_history"}, strlen=38)
[2024-09-13 13:02:15.824053] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9688623159790226459, table={database_id:201001, name_case_mode:2, table_name:"__wr_system_event"}, strlen=17)
[2024-09-13 13:02:15.824075] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=11650360314263353923, table={database_id:201001, name_case_mode:2, table_name:"__wr_event_name"}, strlen=15)
[2024-09-13 13:02:15.824102] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=15856097785690596323, table={database_id:201001, name_case_mode:2, table_name:"__all_routine_privilege"}, strlen=23)
[2024-09-13 13:02:15.824129] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4716672932354326487, table={database_id:201001, name_case_mode:2, table_name:"__all_routine_privilege_history"}, strlen=31)
[2024-09-13 13:02:15.824243] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17999792226030913103, table={database_id:201001, name_case_mode:2, table_name:"__wr_sqlstat"}, strlen=12)
[2024-09-13 13:02:15.824265] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=10701076660877348721, table={database_id:201001, name_case_mode:2, table_name:"__all_ncomp_dll"}, strlen=15)
[2024-09-13 13:02:15.824286] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10392657045332137307, table={database_id:201001, name_case_mode:2, table_name:"__all_aux_stat"}, strlen=14)
[2024-09-13 13:02:15.824328] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15623804334560284769, table={database_id:201001, name_case_mode:2, table_name:"__all_index_usage_info"}, strlen=22)
[2024-09-13 13:02:15.824351] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5410973579032597443, table={database_id:201001, name_case_mode:2, table_name:"__all_transfer_partition_task"}, strlen=29)
[2024-09-13 13:02:15.824376] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1460391870692694099, table={database_id:201001, name_case_mode:2, table_name:"__all_transfer_partition_task_history"}, strlen=37)
[2024-09-13 13:02:15.824396] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1117220808036644773, table={database_id:201001, name_case_mode:2, table_name:"__wr_sqltext"}, strlen=12)
[2024-09-13 13:02:15.824414] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2930611376442922651, table={database_id:201001, name_case_mode:2, table_name:"__all_audit_log_filter"}, strlen=22)
[2024-09-13 13:02:15.824445] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=15228694239222102799, table={database_id:201001, name_case_mode:2, table_name:"__all_audit_log_user"}, strlen=20)
[2024-09-13 13:02:15.824475] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3393161370099407981, table={database_id:201001, name_case_mode:2, table_name:"__all_column_privilege"}, strlen=22)
[2024-09-13 13:02:15.824500] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8394810126509044877, table={database_id:201001, name_case_mode:2, table_name:"__all_column_privilege_history"}, strlen=30)
[2024-09-13 13:02:15.824549] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=109545231548433475, table={database_id:201001, name_case_mode:2, table_name:"__all_ls_replica_task_history"}, strlen=29)
[2024-09-13 13:02:15.824568] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5359467319832074531, table={database_id:201001, name_case_mode:2, table_name:"__all_user_proxy_info"}, strlen=21)
[2024-09-13 13:02:15.824590] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=4527303895030807491, table={database_id:201001, name_case_mode:2, table_name:"__all_user_proxy_info_history"}, strlen=29)
[2024-09-13 13:02:15.824607] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14818930701916259273, table={database_id:201001, name_case_mode:2, table_name:"__all_user_proxy_role_info"}, strlen=26)
[2024-09-13 13:02:15.824626] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14174244267723549669, table={database_id:201001, name_case_mode:2, table_name:"__all_user_proxy_role_info_history"}, strlen=34)
[2024-09-13 13:02:15.824642] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17487649412344447135, table={database_id:201001, name_case_mode:2, table_name:"__all_service"}, strlen=13)
[2024-09-13 13:02:15.824694] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3276578535233262435, table={database_id:201001, name_case_mode:2, table_name:"__all_scheduler_job_run_detail_v2"}, strlen=33)
[2024-09-13 13:02:15.824743] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=17045071868751366117, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_all_table"}, strlen=26)
[2024-09-13 13:02:15.824767] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=5519398100204870277, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_table_column"}, strlen=29)
[2024-09-13 13:02:15.824810] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16117044152472803067, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_table_index"}, strlen=28)
[2024-09-13 13:02:15.824824] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16150779124246126639, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_show_create_database"}, strlen=37)
[2024-09-13 13:02:15.824841] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11884990978553673621, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_show_create_table"}, strlen=34)
[2024-09-13 13:02:15.824854] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2387169672638203455, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_session_variable"}, strlen=33)
[2024-09-13 13:02:15.824865] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=956164978035976065, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_privilege_grant"}, strlen=32)
[2024-09-13 13:02:15.825003] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=10723563407649040877, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_processlist"}, strlen=25)
[2024-09-13 13:02:15.825022] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14601271136071318673, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_warning"}, strlen=24)
[2024-09-13 13:02:15.825036] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9823273726776516055, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_current_tenant"}, strlen=31)
[2024-09-13 13:02:15.825058] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=9449662593763233013, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_database_status"}, strlen=32)
[2024-09-13 13:02:15.825074] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17889888313341552945, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_tenant_status"}, strlen=30)
[2024-09-13 13:02:15.825092] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=81606536831860655, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_statname"}, strlen=25)
[2024-09-13 13:02:15.825114] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15043369427170900939, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_event_name"}, strlen=27)
[2024-09-13 13:02:15.825125] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18255339938544476937, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_global_variable"}, strlen=32)
[2024-09-13 13:02:15.825136] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] set tenant space table name(key=7382965831014104909, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_show_tables"}, strlen=28)
[2024-09-13 13:02:15.825157] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5823639658674059037, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_show_create_procedure"}, strlen=38)
[2024-09-13 13:02:15.825195] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12188669583906781559, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_core_meta_table"}, strlen=29)
[2024-09-13 13:02:15.825301] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15768093327296574529, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_plan_cache_stat"}, strlen=29)
[2024-09-13 13:02:15.825452] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=1929560700166330149, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_plan_stat"}, strlen=23)
[2024-09-13 13:02:15.825480] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=10849111213766514175, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_mem_leak_checker_info"}, strlen=35)
[2024-09-13 13:02:15.825514] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12534175108322031989, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_latch"}, strlen=19)
[2024-09-13 13:02:15.825548] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17946072786506275721, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_kvcache_info"}, strlen=26)
[2024-09-13 13:02:15.825561] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1148958555621586483, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_data_type_class"}, strlen=29)
[2024-09-13 13:02:15.825573] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7870574698325049231, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_data_type"}, strlen=23)
[2024-09-13 13:02:15.825616] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2991237636107603135, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_session_event"}, strlen=27)
[2024-09-13 13:02:15.825655] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13231924137069186047, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_session_wait"}, strlen=26)
[2024-09-13 13:02:15.825691] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13614578644635299749, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_session_wait_history"}, strlen=34)
[2024-09-13 13:02:15.825722] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9602714405924544917, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_system_event"}, strlen=26)
[2024-09-13 13:02:15.825744] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18315226002269187897, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_memstore_info"}, strlen=34)
[2024-09-13 13:02:15.825773] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=18170684266772089053, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_concurrency_object_pool"}, strlen=37)
[2024-09-13 13:02:15.825794] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space
table name(key=792101162394988209, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sesstat"}, strlen=21) [2024-09-13 13:02:15.825818] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=2596156483016570673, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sysstat"}, strlen=21) [2024-09-13 13:02:15.825839] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8571893326947715237, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_disk_stat"}, strlen=23) [2024-09-13 13:02:15.825899] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13122091054986245039, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_memstore_info"}, strlen=27) [2024-09-13 13:02:15.825911] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15259231909277627059, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_upgrade_inspection"}, strlen=32) [2024-09-13 13:02:15.825991] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=1511382916582423927, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_trans_stat"}, strlen=24) [2024-09-13 13:02:15.826015] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=12516260303390182231, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_trans_ctx_mgr_stat"}, strlen=32) [2024-09-13 13:02:15.826062] INFO 
[SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18194554554044506693, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_trans_scheduler"}, strlen=29) [2024-09-13 13:02:15.826324] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18341810671452752685, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sql_audit"}, strlen=23) [2024-09-13 13:02:15.826561] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=17183208702325076817, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_core_all_table"}, strlen=28) [2024-09-13 13:02:15.826657] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] set tenant space table name(key=1978990099111001603, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_core_column_table"}, strlen=31) [2024-09-13 13:02:15.826693] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=5356809869350394683, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_memory_info"}, strlen=25) [2024-09-13 13:02:15.826731] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2426326948649033239, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sys_parameter_stat"}, strlen=32) [2024-09-13 13:02:15.826763] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table 
name(key=2756782331778382803, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_trace_span_info"}, strlen=29) [2024-09-13 13:02:15.826781] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16901204961618683353, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_engine"}, strlen=20) [2024-09-13 13:02:15.826797] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4052037019246478517, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_proxy_server_stat"}, strlen=31) [2024-09-13 13:02:15.826814] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1591733027118987337, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_proxy_sys_variable"}, strlen=32) [2024-09-13 13:02:15.826865] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6049031990595067573, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_proxy_schema"}, strlen=26) [2024-09-13 13:02:15.826899] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant space table name(key=4228542629693930061, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_plan_cache_plan_explain"}, strlen=37) [2024-09-13 13:02:15.826959] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3696554365402450999, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_obrpc_stat"}, strlen=24) [2024-09-13 
13:02:15.826997] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=9895626175744829857, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_outline"}, strlen=24) [2024-09-13 13:02:15.827020] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17674221193126572289, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_concurrent_limit_sql"}, strlen=37) [2024-09-13 13:02:15.827063] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5013799567113964503, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_sstable_macro_info"}, strlen=39) [2024-09-13 13:02:15.827158] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14830728642989910137, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_proxy_partition_info"}, strlen=34) [2024-09-13 13:02:15.827203] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=13603489201581434473, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_proxy_partition"}, strlen=29) [2024-09-13 13:02:15.827240] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7994610665089267825, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_proxy_sub_partition"}, strlen=33) [2024-09-13 13:02:15.827261] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4747451874374698747, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sys_task_status"}, strlen=29) [2024-09-13 13:02:15.827310] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18321215885376430367, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_macro_block_marker_status"}, strlen=39) [2024-09-13 13:02:15.827333] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=17831907667713727217, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_io_stat"}, strlen=21) [2024-09-13 13:02:15.827365] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8519204614051567739, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_long_ops_status"}, strlen=29) [2024-09-13 13:02:15.827405] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=233400546160387759, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_server_object_pool"}, strlen=32) [2024-09-13 13:02:15.827446] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6098701480593983745, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_trans_lock_stat"}, strlen=29) [2024-09-13 13:02:15.827460] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16670391486764922061, table={database_id:201001, 
name_case_mode:2, table_name:"__tenant_virtual_show_create_tablegroup"}, strlen=39) [2024-09-13 13:02:15.827478] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9684979741576106851, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_server_blacklist"}, strlen=30) [2024-09-13 13:02:15.827512] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4804773615932141661, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_parameter_stat"}, strlen=35) [2024-09-13 13:02:15.827534] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8339999715751028429, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_server_schema_info"}, strlen=32) [2024-09-13 13:02:15.827559] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4372933476609421993, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_memory_context_stat"}, strlen=33) [2024-09-13 13:02:15.827626] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=454020862387476241, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dump_tenant_info"}, strlen=30) [2024-09-13 13:02:15.827658] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=6982597829998253055, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_parameter_info"}, strlen=35) [2024-09-13 13:02:15.827671] INFO 
[SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18232713114939768153, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_audit_operation"}, strlen=29) [2024-09-13 13:02:15.827689] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=13356918981701430511, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_audit_action"}, strlen=26) [2024-09-13 13:02:15.827716] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12359944307016886891, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dag_warning_history"}, strlen=33) [2024-09-13 13:02:15.827737] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1637384019149870571, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_encrypt_info"}, strlen=33) [2024-09-13 13:02:15.827751] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3146496550883101155, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_show_restore_preview"}, strlen=37) [2024-09-13 13:02:15.827768] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16344561987762513411, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_master_key_version_info"}, strlen=37) [2024-09-13 13:02:15.827799] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set 
tenant space table name(key=14429749340777123195, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dag"}, strlen=17) [2024-09-13 13:02:15.827818] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14715506532307593681, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dag_scheduler"}, strlen=27) [2024-09-13 13:02:15.827851] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=243048139659601061, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_server_compaction_progress"}, strlen=40) [2024-09-13 13:02:15.827891] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3834124668751569893, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_compaction_progress"}, strlen=40) [2024-09-13 13:02:15.827914] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4519553089221726529, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_compaction_diagnose_info"}, strlen=38) [2024-09-13 13:02:15.827942] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=15614023263253954317, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_compaction_suggestion"}, strlen=35) [2024-09-13 13:02:15.827989] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7129529410050983945, table={database_id:201001, name_case_mode:2, 
table_name:"__all_virtual_session_info"}, strlen=26) [2024-09-13 13:02:15.828045] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13147890795620097223, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_compaction_history"}, strlen=39) [2024-09-13 13:02:15.828064] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=1600877708693490759, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_io_calibration_status"}, strlen=35) [2024-09-13 13:02:15.828085] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=4898063400552337509, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_io_benchmark"}, strlen=26) [2024-09-13 13:02:15.828114] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=1465653522913614701, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_io_quota"}, strlen=22) [2024-09-13 13:02:15.828134] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3605455190298286947, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_server_compaction_event_history"}, strlen=45) [2024-09-13 13:02:15.828149] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5868130371028614267, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ddl_sim_point"}, strlen=27) [2024-09-13 13:02:15.828166] INFO [SHARE.SCHEMA] init_sys_table_name_map 
(ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=5904591545118668583, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ddl_sim_point_stat"}, strlen=32) [2024-09-13 13:02:15.828176] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=4010968026106459731, table={database_id:201002, name_case_mode:2, table_name:"SESSION_VARIABLES"}, strlen=17) [2024-09-13 13:02:15.828194] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=12183085506146544347, table={database_id:201002, name_case_mode:2, table_name:"GLOBAL_STATUS"}, strlen=13) [2024-09-13 13:02:15.828204] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17971612439695227793, table={database_id:201002, name_case_mode:2, table_name:"SESSION_STATUS"}, strlen=14) [2024-09-13 13:02:15.828301] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] set tenant space table name(key=16887988928553426911, table={database_id:201003, name_case_mode:2, table_name:"user"}, strlen=4) [2024-09-13 13:02:15.828344] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2946110349501038242, table={database_id:201003, name_case_mode:2, table_name:"db"}, strlen=2) [2024-09-13 13:02:15.828390] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11767149002664574223, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_lock_wait_stat"}, strlen=28) 
[2024-09-13 13:02:15.828444] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16785437781243172845, table={database_id:201003, name_case_mode:2, table_name:"proc"}, strlen=4) [2024-09-13 13:02:15.828463] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9373709181414260655, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_collation"}, strlen=26) [2024-09-13 13:02:15.828478] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=698041592007753371, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_charset"}, strlen=24) [2024-09-13 13:02:15.828506] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7686771750626269381, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_memstore_allocator_info"}, strlen=44) [2024-09-13 13:02:15.828547] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3975159249677897845, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_table_mgr"}, strlen=23) [2024-09-13 13:02:15.828571] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=12213057926634880251, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_freeze_info"}, strlen=25) [2024-09-13 13:02:15.828592] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table 
name(key=5651337753085766071, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_bad_block_table"}, strlen=29) [2024-09-13 13:02:15.828618] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3184051855783834287, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_px_worker_stat"}, strlen=28) [2024-09-13 13:02:15.828639] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=540057813118157961, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_auto_increment"}, strlen=28) [2024-09-13 13:02:15.828655] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=7883789047354781169, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sequence_value"}, strlen=28) [2024-09-13 13:02:15.828743] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9797883523630614005, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_store_stat"}, strlen=31) [2024-09-13 13:02:15.828771] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4204286209984997437, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ddl_operation"}, strlen=27) [2024-09-13 13:02:15.828816] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=403399859139876099, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_outline"}, strlen=21) [2024-09-13 13:02:15.828861] INFO 
[SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2587094168211168195, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_outline_history"}, strlen=29) [2024-09-13 13:02:15.828890] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17728413332431590231, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_synonym"}, strlen=21) [2024-09-13 13:02:15.828938] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=11952264632673044291, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_synonym_history"}, strlen=29) [2024-09-13 13:02:15.828978] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=12205109805863173929, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_database_privilege"}, strlen=32) [2024-09-13 13:02:15.829016] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5622413429548427137, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_database_privilege_history"}, strlen=40) [2024-09-13 13:02:15.829055] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10873134288822667535, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_table_privilege"}, strlen=29) [2024-09-13 13:02:15.829098] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space 
table name(key=17882259221661241491, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_table_privilege_history"}, strlen=37) [2024-09-13 13:02:15.829120] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14771492140714248861, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_database"}, strlen=22) [2024-09-13 13:02:15.829147] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9833893932480909, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_database_history"}, strlen=30) [2024-09-13 13:02:15.829184] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4189394883475384051, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablegroup"}, strlen=24) [2024-09-13 13:02:15.829223] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=5094629874663768497, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablegroup_history"}, strlen=32) [2024-09-13 13:02:15.829389] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11398463348276985611, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_table"}, strlen=19) [2024-09-13 13:02:15.829577] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=4608830451644128399, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_table_history"}, strlen=27) [2024-09-13 
13:02:15.829652] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=7562389478707205431, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_column"}, strlen=20)
[2024-09-13 13:02:15.829723] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=16431477134343216393, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_column_history"}, strlen=28)
[2024-09-13 13:02:15.829779] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8489996283756060637, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_part"}, strlen=18)
[2024-09-13 13:02:15.829841] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=230522973419338389, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_part_history"}, strlen=26)
[2024-09-13 13:02:15.829896] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14128098823668382455, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_part_info"}, strlen=23)
[2024-09-13 13:02:15.829947] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13123746419565878103, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_part_info_history"}, strlen=31)
[2024-09-13 13:02:15.829986] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10106353299013747501, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_def_sub_part"}, strlen=26)
[2024-09-13 13:02:15.830026] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3723349451437439589, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_def_sub_part_history"}, strlen=34)
[2024-09-13 13:02:15.830071] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1159724203221591333, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sub_part"}, strlen=22)
[2024-09-13 13:02:15.830118] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=13485400240322980621, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sub_part_history"}, strlen=30)
[2024-09-13 13:02:15.830152] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=3717080332221137241, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_constraint"}, strlen=24)
[2024-09-13 13:02:15.830184] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=2261503562025306545, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_constraint_history"}, strlen=32)
[2024-09-13 13:02:15.830217] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4409443192218650679, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_foreign_key"}, strlen=25)
[2024-09-13 13:02:15.830253] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=14361527399166515819, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_foreign_key_history"}, strlen=33)
[2024-09-13 13:02:15.830271] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5950001812682361007, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_foreign_key_column"}, strlen=32)
[2024-09-13 13:02:15.830293] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2561377922478625665, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_foreign_key_column_history"}, strlen=40)
[2024-09-13 13:02:15.830306] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=6775471235914828521, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_temp_table"}, strlen=24)
[2024-09-13 13:02:15.830322] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2619761066459048579, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ori_schema_version"}, strlen=32)
[2024-09-13 13:02:15.830341] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=3599012369663730315, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sys_stat"}, strlen=22)
[2024-09-13 13:02:15.830445] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11285558379627175171, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_user"}, strlen=18)
[2024-09-13 13:02:15.830538] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=13991047535965035541, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_user_history"}, strlen=26)
[2024-09-13 13:02:15.830563] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13546331211903348077, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sys_variable"}, strlen=26)
[2024-09-13 13:02:15.830591] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15203718889589478437, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sys_variable_history"}, strlen=34)
[2024-09-13 13:02:15.830611] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13952794816638871429, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_func"}, strlen=18)
[2024-09-13 13:02:15.830633] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=302374348645638613, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_func_history"}, strlen=26)
[2024-09-13 13:02:15.830665] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8073781812480077855, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_package"}, strlen=21)
[2024-09-13 13:02:15.830705] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=997892529133287171, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_package_history"}, strlen=29)
[2024-09-13 13:02:15.830744] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8717918775075212819, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_routine"}, strlen=21)
[2024-09-13 13:02:15.830784] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16857541619530486019, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_routine_history"}, strlen=29)
[2024-09-13 13:02:15.830826] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10730994778248205851, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_routine_param"}, strlen=27)
[2024-09-13 13:02:15.830882] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=9535769933649594591, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_routine_param_history"}, strlen=35)
[2024-09-13 13:02:15.830923] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=13863005444471519661, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_type"}, strlen=18)
[2024-09-13 13:02:15.830963] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6760888573649057493, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_type_history"}, strlen=26)
[2024-09-13 13:02:15.831001] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6555548921245691493, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_type_attr"}, strlen=23)
[2024-09-13 13:02:15.831041] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=8412557849092588759, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_type_attr_history"}, strlen=31)
[2024-09-13 13:02:15.831075] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16069758238323198479, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_coll_type"}, strlen=23)
[2024-09-13 13:02:15.831111] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=948630518003741015, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_coll_type_history"}, strlen=31)
[2024-09-13 13:02:15.831130] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5565045024453101299, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_recyclebin"}, strlen=24)
[2024-09-13 13:02:15.831163] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=9856389798854850085, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sequence_object"}, strlen=29)
[2024-09-13 13:02:15.831195] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15117924462640267155, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sequence_object_history"}, strlen=37)
[2024-09-13 13:02:15.831229] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=9018600011392890789, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_raid_stat"}, strlen=23)
[2024-09-13 13:02:15.831298] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5346947681947891153, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dtl_channel"}, strlen=25)
[2024-09-13 13:02:15.831334] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10317028212445898625, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dtl_memory"}, strlen=24)
[2024-09-13 13:02:15.831386] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7495290977220553369, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dblink"}, strlen=20)
[2024-09-13 13:02:15.831448] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=39914602108813961, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dblink_history"}, strlen=28)
[2024-09-13 13:02:15.831467] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10896661835979267033, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_role_grantee_map"}, strlen=37)
[2024-09-13 13:02:15.831488] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12906519302648313763, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_role_grantee_map_history"}, strlen=45)
[2024-09-13 13:02:15.831509] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2079997026815527299, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_keystore"}, strlen=29)
[2024-09-13 13:02:15.831537] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17490817767971734035, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_keystore_history"}, strlen=37)
[2024-09-13 13:02:15.831556] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=485769933102303643, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_ols_policy"}, strlen=31)
[2024-09-13 13:02:15.831578] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=14691388718955392839, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_ols_policy_history"}, strlen=39)
[2024-09-13 13:02:15.831602] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=5034473838598928101, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_ols_component"}, strlen=34)
[2024-09-13 13:02:15.831629] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9983559006356479861, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_ols_component_history"}, strlen=42)
[2024-09-13 13:02:15.831648] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2561451508816412843, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_ols_label"}, strlen=30)
[2024-09-13 13:02:15.831670] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8383393710500269469, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_ols_label_history"}, strlen=38)
[2024-09-13 13:02:15.831693] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=15170142716402851629, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_ols_user_level"}, strlen=35)
[2024-09-13 13:02:15.831720] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4781830095091789679, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_ols_user_level_history"}, strlen=43)
[2024-09-13 13:02:15.831739] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12055934656033059667, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_tablespace"}, strlen=31)
[2024-09-13 13:02:15.831762] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14437243768588124295, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_tablespace_history"}, strlen=39)
[2024-09-13 13:02:15.831806] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15119970302480577059, table={database_id:201001, name_case_mode:2, table_name:"__ALL_VIRTUAL_INFORMATION_COLUMNS"}, strlen=33)
[2024-09-13 13:02:15.831826] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=6863781887472262157, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_user_failed_login_stat"}, strlen=43)
[2024-09-13 13:02:15.831854] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=8017982115028235945, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_profile"}, strlen=28)
[2024-09-13 13:02:15.831894] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2721457751194233305, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_profile_history"}, strlen=36)
[2024-09-13 13:02:15.831915] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16565928560198285159, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_security_audit"}, strlen=28)
[2024-09-13 13:02:15.831939] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15453874706799428633, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_security_audit_history"}, strlen=36)
[2024-09-13 13:02:15.832003] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15695521474672823901, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_trigger"}, strlen=21)
[2024-09-13 13:02:15.832068] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7389671326051986051, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_trigger_history"}, strlen=29)
[2024-09-13 13:02:15.832089] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13049868170434991729, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ps_stat"}, strlen=21)
[2024-09-13 13:02:15.832123] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14136636651706376169, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ps_item_info"}, strlen=26)
[2024-09-13 13:02:15.832166] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18062948468190921669, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sql_workarea_history_stat"}, strlen=39)
[2024-09-13 13:02:15.832204] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15455121100221849387, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sql_workarea_active"}, strlen=33)
[2024-09-13 13:02:15.832230] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=422828323326750833, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sql_workarea_histogram"}, strlen=36)
[2024-09-13 13:02:15.832259] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3079069077185636353, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sql_workarea_memory_info"}, strlen=38)
[2024-09-13 13:02:15.832343] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7133451772835048223, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_security_audit_record"}, strlen=35)
[2024-09-13 13:02:15.832360] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7228018561142438595, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sysauth"}, strlen=21)
[2024-09-13 13:02:15.832379] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12846125774171091971, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sysauth_history"}, strlen=29)
[2024-09-13 13:02:15.832402] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1031644095001014915, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_objauth"}, strlen=21)
[2024-09-13 13:02:15.832428] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7612607355932228675, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_objauth_history"}, strlen=29)
[2024-09-13 13:02:15.832449] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=11306438604387822971, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_info"}, strlen=25)
[2024-09-13 13:02:15.832482] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6133276299577641357, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_error"}, strlen=19)
[2024-09-13 13:02:15.832508] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9531929556517464041, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_id_service"}, strlen=24)
[2024-09-13 13:02:15.832559] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=17] set tenant space table name(key=9012299189202494091, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_object_type"}, strlen=25)
[2024-09-13 13:02:15.832661] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5599497167657472551, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sql_plan_monitor"}, strlen=30)
[2024-09-13 13:02:15.832679] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2901527188234334077, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sql_monitor_statname"}, strlen=34)
[2024-09-13 13:02:15.832713] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13461434051077489945, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_open_cursor"}, strlen=25)
[2024-09-13 13:02:15.832727] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15974226688182846823, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_time_zone"}, strlen=23)
[2024-09-13 13:02:15.832741] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3845598664882503553, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_time_zone_name"}, strlen=28)
[2024-09-13 13:02:15.832757] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9206088101880571647, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_time_zone_transition"}, strlen=34)
[2024-09-13 13:02:15.832775] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12110693898284894383, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_time_zone_transition_type"}, strlen=39)
[2024-09-13 13:02:15.832795] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4349716677028971385, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_constraint_column"}, strlen=31)
[2024-09-13 13:02:15.832814] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3902236903292639367, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_constraint_column_history"}, strlen=39)
[2024-09-13 13:02:15.832902] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=17419216990975841511, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_files"}, strlen=19)
[2024-09-13 13:02:15.832936] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6450260072801964657, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dependency"}, strlen=24)
[2024-09-13 13:02:15.832965] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=17891045281809288447, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_object_definition"}, strlen=34)
[2024-09-13 13:02:15.832995] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14355066443958061683, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_global_transaction"}, strlen=32)
[2024-09-13 13:02:15.833029] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3628010422253109179, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ddl_task_status"}, strlen=29)
[2024-09-13 13:02:15.833072] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6750277639978755225, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_deadlock_event_history"}, strlen=36)
[2024-09-13 13:02:15.833109] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7779057099719093317, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_column_usage"}, strlen=26)
[2024-09-13 13:02:15.833132] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4969690976617040677, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_ctx_memory_info"}, strlen=36)
[2024-09-13 13:02:15.833171] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11735107274888830045, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_job"}, strlen=17)
[2024-09-13 13:02:15.833190] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15293252979412049315, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_job_log"}, strlen=21)
[2024-09-13 13:02:15.833212] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=16523867938420976589, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_directory"}, strlen=30)
[2024-09-13 13:02:15.833233] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2862981526483453981, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_directory_history"}, strlen=38)
[2024-09-13 13:02:15.833280] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4628984733493714743, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_table_stat"}, strlen=24)
[2024-09-13 13:02:15.833332] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17730013269317283161, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_column_stat"}, strlen=25)
[2024-09-13 13:02:15.833359] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11721256880206584943, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_histogram_stat"}, strlen=28)
[2024-09-13 13:02:15.833376] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11262788829881758221, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_memory_info"}, strlen=32)
[2024-09-13 13:02:15.833396] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=14448741272763703023, table={database_id:201001, name_case_mode:2, table_name:"__tenant_virtual_show_create_trigger"}, strlen=36)
[2024-09-13 13:02:15.833421] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7620780125860384401, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_px_target_monitor"}, strlen=31)
[2024-09-13 13:02:15.833456] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9352088221461444127, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_monitor_modified"}, strlen=30)
[2024-09-13 13:02:15.833502] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11704116115689029617, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_table_stat_history"}, strlen=32)
[2024-09-13 13:02:15.833559] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=12492210588049866667, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_column_stat_history"}, strlen=33)
[2024-09-13 13:02:15.833597] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11240216773464383513, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_histogram_stat_history"}, strlen=36)
[2024-09-13 13:02:15.833622] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15587507658764508021, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_optstat_global_prefs"}, strlen=34)
[2024-09-13 13:02:15.833644] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6171141165154741809, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_optstat_user_prefs"}, strlen=32)
[2024-09-13 13:02:15.833693] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1724892932833849851, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dblink_info"}, strlen=25)
[2024-09-13 13:02:15.833743] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14759636445393915577, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_log_archive_progress"}, strlen=34)
[2024-09-13 13:02:15.833786] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17357481071210235371, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_log_archive_history"}, strlen=33)
[2024-09-13 13:02:15.833831] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12362983739570776107, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_log_archive_piece_files"}, strlen=37)
[2024-09-13 13:02:15.833866] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15139794031340849027, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_log_archive_progress"}, strlen=37)
[2024-09-13 13:02:15.833897] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant space table name(key=7729278448204384555, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_storage_info"}, strlen=33)
[2024-09-13 13:02:15.833932] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=15447288619456367615, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_status"}, strlen=23)
[2024-09-13 13:02:15.833951] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13028989530138550297, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls"}, strlen=16)
[2024-09-13 13:02:15.833992] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9284250887884948155, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_meta_table"}, strlen=27)
[2024-09-13 13:02:15.834019] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5493912404109002883, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_meta_table"}, strlen=31)
[2024-09-13 13:02:15.834036] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3588497416681679573, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_to_ls"}, strlen=26)
[2024-09-13 13:02:15.834114] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12100763325643629103, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_load_data_stat"}, strlen=28)
[2024-09-13 13:02:15.834132] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=18117172864998676173, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dam_last_arch_ts"}, strlen=30)
[2024-09-13 13:02:15.834155] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1537603850334117277, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dam_cleanup_jobs"}, strlen=30)
[2024-09-13 13:02:15.834211] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3044850077559108747, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_task"}, strlen=25)
[2024-09-13 13:02:15.834265] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17202505655549402603, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_task_history"}, strlen=33)
[2024-09-13 13:02:15.834333] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=17047651074108843053, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_ls_task"}, strlen=28)
[2024-09-13 13:02:15.834393] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11389951488068771609, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_ls_task_history"}, strlen=36)
[2024-09-13 13:02:15.834432] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15054984140751875691, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_ls_task_info"}, strlen=33)
[2024-09-13 13:02:15.834463] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant space table name(key=9274573885911750957, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_skipped_tablet"}, strlen=35)
[2024-09-13
13:02:15.834486] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17493899479825934703, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_skipped_tablet_history"}, strlen=43) [2024-09-13 13:02:15.834519] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=7509444197035147337, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_schedule_task"}, strlen=34) [2024-09-13 13:02:15.834536] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10869902154882669011, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_to_table_history"}, strlen=37) [2024-09-13 13:02:15.834585] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12546918956490930507, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_log_stat"}, strlen=22) [2024-09-13 13:02:15.834641] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=6701865584114924347, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_info"}, strlen=25) [2024-09-13 13:02:15.834674] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=11855718107486377787, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_recovery_stat"}, strlen=30) [2024-09-13 13:02:15.834730] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] set tenant space table name(key=12587755641033976731, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_ls_task_info_history"}, strlen=41) [2024-09-13 13:02:15.834765] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=18406284742543154199, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_replica_checksum"}, strlen=37) [2024-09-13 13:02:15.834795] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=1205737448185443981, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ddl_checksum"}, strlen=26) [2024-09-13 13:02:15.834838] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=5925394552384815091, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ddl_error_message"}, strlen=31) [2024-09-13 13:02:15.834907] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=17271418239381163395, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_replica_task"}, strlen=29) [2024-09-13 13:02:15.834941] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=7711276800779762705, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_pending_transaction"}, strlen=33) [2024-09-13 13:02:15.835027] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16815142292578973063, 
table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_scheduler_job"}, strlen=34) [2024-09-13 13:02:15.835046] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8565889842710766529, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_scheduler_job_run_detail"}, strlen=45) [2024-09-13 13:02:15.835086] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9539175925425545117, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_scheduler_program"}, strlen=38) [2024-09-13 13:02:15.835114] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11425374051081425547, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_scheduler_program_argument"}, strlen=47) [2024-09-13 13:02:15.835147] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant space table name(key=3916000153522362885, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_context"}, strlen=28) [2024-09-13 13:02:15.835174] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9533827165629233177, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_context_history"}, strlen=36) [2024-09-13 13:02:15.835198] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7814232905113498645, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_global_context_value"}, 
strlen=34) [2024-09-13 13:02:15.835236] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=11860989642212446163, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_unit"}, strlen=18) [2024-09-13 13:02:15.835279] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14951763112884753087, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_server"}, strlen=20) [2024-09-13 13:02:15.835299] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9351457796962509085, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_election_reference_info"}, strlen=40) [2024-09-13 13:02:15.835338] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=592293780414098497, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dtl_interm_result_monitor"}, strlen=39) [2024-09-13 13:02:15.835384] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=1060854932764735443, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_archive_stat"}, strlen=26) [2024-09-13 13:02:15.835406] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9539041885416743927, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_apply_stat"}, strlen=24) [2024-09-13 13:02:15.835429] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] 
[lt=7] set tenant space table name(key=10496609065609346969, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_replay_stat"}, strlen=25) [2024-09-13 13:02:15.835473] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=18] set tenant space table name(key=11413213707988076431, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_proxy_routine"}, strlen=27) [2024-09-13 13:02:15.835516] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10358779125222260293, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_delete_task"}, strlen=32) [2024-09-13 13:02:15.835572] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=11599824501998688193, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_delete_task_history"}, strlen=40) [2024-09-13 13:02:15.835616] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=11154654444260157799, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_delete_ls_task"}, strlen=35) [2024-09-13 13:02:15.835675] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3485147956206526255, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_delete_ls_task_history"}, strlen=43) [2024-09-13 13:02:15.835732] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=343669036260397667, table={database_id:201001, name_case_mode:2, 
table_name:"__all_virtual_ls_info"}, strlen=21) [2024-09-13 13:02:15.835791] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=13889191828480646139, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_info"}, strlen=25) [2024-09-13 13:02:15.835832] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=6079923593706602753, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_obj_lock"}, strlen=22) [2024-09-13 13:02:15.835870] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6472093144422145235, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_zone_merge_info"}, strlen=29) [2024-09-13 13:02:15.835908] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] set tenant space table name(key=1945215902301811325, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_merge_info"}, strlen=24) [2024-09-13 13:02:15.835937] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=15964639146969719227, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tx_data_table"}, strlen=27) [2024-09-13 13:02:15.835959] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7673918398027639835, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_transaction_freeze_checkpoint"}, strlen=43) [2024-09-13 13:02:15.835979] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15624299328231463817, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_transaction_checkpoint"}, strlen=36) [2024-09-13 13:02:15.835997] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15798389760806859297, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_checkpoint"}, strlen=24) [2024-09-13 13:02:15.836075] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=15531534655917745073, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_set_files"}, strlen=30) [2024-09-13 13:02:15.836119] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=71109448549058811, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_job"}, strlen=24) [2024-09-13 13:02:15.836160] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12387585157070926513, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_job_history"}, strlen=32) [2024-09-13 13:02:15.836177] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=12350315148594123023, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_plan_baseline"}, strlen=27) [2024-09-13 13:02:15.836217] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6183609270707877497, table={database_id:201001, 
name_case_mode:2, table_name:"__all_virtual_plan_baseline_item"}, strlen=32) [2024-09-13 13:02:15.836235] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3151019715489723885, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_spm_config"}, strlen=24) [2024-09-13 13:02:15.836353] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=13132184332004101493, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ash"}, strlen=17) [2024-09-13 13:02:15.836376] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=11918594671119802407, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dml_stats"}, strlen=23) [2024-09-13 13:02:15.836392] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13036576026545251339, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_log_archive_dest_parameter"}, strlen=40) [2024-09-13 13:02:15.836408] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16093634333412927151, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_parameter"}, strlen=30) [2024-09-13 13:02:15.836425] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10155872605547575013, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_restore_job"}, strlen=25) [2024-09-13 13:02:15.836501] INFO [SHARE.SCHEMA] init_sys_table_name_map 
(ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14185053981704344363, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_restore_job_history"}, strlen=33) [2024-09-13 13:02:15.836524] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1928064507193205857, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_restore_progress"}, strlen=30) [2024-09-13 13:02:15.836560] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5810440368445130859, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_restore_progress"}, strlen=33) [2024-09-13 13:02:15.836595] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=17530758321575504433, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_restore_history"}, strlen=32) [2024-09-13 13:02:15.836620] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1170841563710246107, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_storage_info_history"}, strlen=41) [2024-09-13 13:02:15.836672] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=4703139222417769329, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_delete_job"}, strlen=31) [2024-09-13 13:02:15.836717] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table 
name(key=10632192905468537543, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_delete_job_history"}, strlen=39) [2024-09-13 13:02:15.836742] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11000215179100375213, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_backup_delete_policy"}, strlen=34) [2024-09-13 13:02:15.836767] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5876935806899258221, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_ddl_kv_info"}, strlen=32) [2024-09-13 13:02:15.836781] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14873430687965254563, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_privilege"}, strlen=23) [2024-09-13 13:02:15.836814] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2089366038399339591, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_pointer_status"}, strlen=35) [2024-09-13 13:02:15.836836] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10627082229730709285, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_storage_meta_memory_status"}, strlen=40) [2024-09-13 13:02:15.836869] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=8939272523027070653, table={database_id:201001, name_case_mode:2, 
table_name:"__all_virtual_kvcache_store_memblock"}, strlen=36) [2024-09-13 13:02:15.836905] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant space table name(key=9216959591761298069, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_mock_fk_parent_table"}, strlen=34) [2024-09-13 13:02:15.836926] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4129838943307665781, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_mock_fk_parent_table_history"}, strlen=42) [2024-09-13 13:02:15.836949] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=984891633015355805, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_mock_fk_parent_table_column"}, strlen=41) [2024-09-13 13:02:15.836969] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5460854150695835531, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_mock_fk_parent_table_column_history"}, strlen=49) [2024-09-13 13:02:15.836988] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8919811956763903017, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_log_restore_source"}, strlen=32) [2024-09-13 13:02:15.837006] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10954304872243212639, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_query_response_time"}, strlen=33) [2024-09-13 13:02:15.837039] 
INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6545585155718650571, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_kv_ttl_task"}, strlen=25) [2024-09-13 13:02:15.837069] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18042509731592680363, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_kv_ttl_task_history"}, strlen=33) [2024-09-13 13:02:15.837098] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13998009241548891997, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_column_checksum_error_info"}, strlen=40) [2024-09-13 13:02:15.837117] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12075422687990272449, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_kvcache_handle_leak_info"}, strlen=38) [2024-09-13 13:02:15.837139] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2292498830973454709, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_compaction_info"}, strlen=36) [2024-09-13 13:02:15.837175] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17021727362987859019, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_replica_task_plan"}, strlen=34) [2024-09-13 13:02:15.837204] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=13644660654514962207, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_schema_memory"}, strlen=27) [2024-09-13 13:02:15.837237] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=17356939664844236393, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_schema_slot"}, strlen=25) [2024-09-13 13:02:15.837270] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4071040417179486983, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_minor_freeze_info"}, strlen=31) [2024-09-13 13:02:15.837310] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=1487015695059130217, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_show_trace"}, strlen=24) [2024-09-13 13:02:15.837384] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=1291559025050508855, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ha_diagnose"}, strlen=25) [2024-09-13 13:02:15.837407] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=8085407918903625525, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_data_dictionary_in_log"}, strlen=36) [2024-09-13 13:02:15.837458] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1668431628577824439, table={database_id:201001, name_case_mode:2, 
table_name:"__all_virtual_transfer_task"}, strlen=27) [2024-09-13 13:02:15.837508] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16238595714709201631, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_transfer_task_history"}, strlen=35) [2024-09-13 13:02:15.837533] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2052785859105055013, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_balance_job"}, strlen=25) [2024-09-13 13:02:15.837565] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14209611581564978411, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_balance_job_history"}, strlen=33) [2024-09-13 13:02:15.837610] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=16568327804104172345, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_balance_task"}, strlen=26) [2024-09-13 13:02:15.837655] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11268964407024390053, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_balance_task_history"}, strlen=34) [2024-09-13 13:02:15.837684] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=5834086537654077777, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_rls_policy"}, strlen=24) [2024-09-13 13:02:15.837720] INFO [SHARE.SCHEMA] init_sys_table_name_map 
(ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=11527250685653204145, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_rls_policy_history"}, strlen=32) [2024-09-13 13:02:15.837735] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4391310970442466989, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_rls_security_column"}, strlen=33) [2024-09-13 13:02:15.837754] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13746413204235921179, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_rls_security_column_history"}, strlen=41) [2024-09-13 13:02:15.837771] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1247673376743413805, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_rls_group"}, strlen=23) [2024-09-13 13:02:15.837804] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=18138749897546578327, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_rls_group_history"}, strlen=31) [2024-09-13 13:02:15.837824] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11434448096665393395, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_rls_context"}, strlen=25) [2024-09-13 13:02:15.837844] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=4054725959564269867, 
table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_rls_context_history"}, strlen=33) [2024-09-13 13:02:15.837867] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=5969152825870287687, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_rls_attribute"}, strlen=27) [2024-09-13 13:02:15.837895] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=9230262160036425951, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_rls_attribute_history"}, strlen=35) [2024-09-13 13:02:15.837931] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17034345015754626881, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_mysql_sys_agent"}, strlen=36) [2024-09-13 13:02:15.838038] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8067658311674277467, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sql_plan"}, strlen=22) [2024-09-13 13:02:15.838057] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13612515769418554729, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_core_table"}, strlen=24) [2024-09-13 13:02:15.838080] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=9402952207825142477, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_malloc_sample_info"}, strlen=32) [2024-09-13 13:02:15.838103] INFO 
[SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=5144966946066999899, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_arb_replica_task"}, strlen=33) [2024-09-13 13:02:15.838133] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18198900394465886683, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_arb_replica_task_history"}, strlen=41) [2024-09-13 13:02:15.838151] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15297394115237879971, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_archive_dest_status"}, strlen=33) [2024-09-13 13:02:15.838174] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=13005540884162415703, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_io_scheduler"}, strlen=26) [2024-09-13 13:02:15.838200] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=17728228981306809755, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_external_table_file"}, strlen=33) [2024-09-13 13:02:15.838253] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1860983463288664109, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_mds_node_stat"}, strlen=27) [2024-09-13 13:02:15.838295] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set 
tenant space table name(key=10530148734795515671, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_mds_event_history"}, strlen=31) [2024-09-13 13:02:15.838330] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5968137363206907135, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dup_ls_lease_mgr"}, strlen=30) [2024-09-13 13:02:15.838360] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15136874716734464437, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dup_ls_tablet_set"}, strlen=31) [2024-09-13 13:02:15.838382] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8598226558419424097, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_dup_ls_tablets"}, strlen=28) [2024-09-13 13:02:15.838406] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14995035787260068275, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tx_data"}, strlen=21) [2024-09-13 13:02:15.838443] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16042482742831708917, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_task_opt_stat_gather_history"}, strlen=42) [2024-09-13 13:02:15.838473] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13772490704032415727, table={database_id:201001, name_case_mode:2, 
table_name:"__all_virtual_table_opt_stat_gather_history"}, strlen=43) [2024-09-13 13:02:15.838509] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7721711920597138893, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_opt_stat_gather_monitor"}, strlen=37) [2024-09-13 13:02:15.838543] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=8131848130447112427, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_thread"}, strlen=20) [2024-09-13 13:02:15.838569] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11231016621379798787, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_arbitration_member_info"}, strlen=37) [2024-09-13 13:02:15.838584] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13682262039954657957, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_arbitration_service_status"}, strlen=40) [2024-09-13 13:02:15.838665] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8104150357795778823, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_wr_active_session_history"}, strlen=39) [2024-09-13 13:02:15.838687] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2451381255180576193, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_wr_snapshot"}, strlen=25) [2024-09-13 13:02:15.838700] INFO [SHARE.SCHEMA] 
init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12651551067153809327, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_wr_statname"}, strlen=25) [2024-09-13 13:02:15.838719] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17873980323221140663, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_wr_sysstat"}, strlen=24) [2024-09-13 13:02:15.838751] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1493344956298490013, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_kv_connection"}, strlen=27) [2024-09-13 13:02:15.838782] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=9989051053516796343, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_virtual_long_ops_status_mysql_sys_agent"}, strlen=53) [2024-09-13 13:02:15.838801] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15844349189330946855, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_transfer_member_list_lock_info"}, strlen=47) [2024-09-13 13:02:15.838830] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=1657946496038858739, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_timestamp_service"}, strlen=31) [2024-09-13 13:02:15.838853] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set 
tenant space table name(key=1286434179101877883, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_resource_pool_mysql_sys_agent"}, strlen=43) [2024-09-13 13:02:15.838883] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8963527203131936283, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_px_p2p_datahub"}, strlen=28) [2024-09-13 13:02:15.838905] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11952147885549452999, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_log_restore_status"}, strlen=35) [2024-09-13 13:02:15.838938] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11460087502655244783, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_parameter"}, strlen=30) [2024-09-13 13:02:15.838961] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6152974097711947533, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tablet_buffer_info"}, strlen=32) [2024-09-13 13:02:15.838983] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7905855757146812175, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_wr_control"}, strlen=24) [2024-09-13 13:02:15.839027] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=3372244551647335781, table={database_id:201001, name_case_mode:2, 
table_name:"__all_virtual_tenant_event_history"}, strlen=34) [2024-09-13 13:02:15.839047] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17064265643146925141, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_balance_task_helper"}, strlen=33) [2024-09-13 13:02:15.839067] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12096839057539653341, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_balance_group_ls_stat"}, strlen=35) [2024-09-13 13:02:15.839091] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=15964651543123332159, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_cgroup_config"}, strlen=27) [2024-09-13 13:02:15.839110] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4289222286364158957, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_flt_config"}, strlen=24) [2024-09-13 13:02:15.839132] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4411403517229041573, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_scheduler_job_class"}, strlen=40) [2024-09-13 13:02:15.839213] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13536547784777408817, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_recover_table_job"}, strlen=31) [2024-09-13 13:02:15.839290] INFO [SHARE.SCHEMA] 
init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7973472036953049223, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_recover_table_job_history"}, strlen=39) [2024-09-13 13:02:15.839367] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6966368003139150063, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_import_table_job"}, strlen=30) [2024-09-13 13:02:15.839447] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4751390230936711261, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_import_table_job_history"}, strlen=38) [2024-09-13 13:02:15.839524] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11286289711302332047, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_import_table_task"}, strlen=31) [2024-09-13 13:02:15.839597] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16092162188021118343, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_import_table_task_history"}, strlen=39) [2024-09-13 13:02:15.839624] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9229442554812877735, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_data_activity_metrics"}, strlen=35) [2024-09-13 13:02:15.839659] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set 
tenant space table name(key=10718475031531619939, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_storage_ha_error_diagnose"}, strlen=39) [2024-09-13 13:02:15.839693] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=1412388920416779901, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_storage_ha_perf_diagnose"}, strlen=38) [2024-09-13 13:02:15.839738] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3771098543848547687, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_checkpoint_diagnose_memtable_info"}, strlen=47) [2024-09-13 13:02:15.839770] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9759733904445142113, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_checkpoint_diagnose_checkpoint_unit_info"}, strlen=54) [2024-09-13 13:02:15.839792] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14692758621090566273, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_checkpoint_diagnose_info"}, strlen=38) [2024-09-13 13:02:15.839813] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10119511710787131011, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_wr_system_event"}, strlen=29) [2024-09-13 13:02:15.839834] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3156274413447503307, table={database_id:201001, 
name_case_mode:2, table_name:"__all_virtual_wr_event_name"}, strlen=27) [2024-09-13 13:02:15.839880] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=2655950866746185399, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_scheduler_running_job"}, strlen=42) [2024-09-13 13:02:15.839904] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2050364268541204883, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_routine_privilege"}, strlen=31) [2024-09-13 13:02:15.839931] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13839826595388648007, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_routine_privilege_history"}, strlen=39) [2024-09-13 13:02:15.840048] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=5181494283587313153, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sqlstat"}, strlen=21) [2024-09-13 13:02:15.840160] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7920741514584227639, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_wr_sqlstat"}, strlen=24) [2024-09-13 13:02:15.840180] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17911465648816144107, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_aux_stat"}, strlen=22) [2024-09-13 13:02:15.840191] INFO [SHARE.SCHEMA] 
init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2001432699442288201, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_sys_variable_default_value"}, strlen=40) [2024-09-13 13:02:15.840215] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16111023212303507763, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_transfer_partition_task"}, strlen=37) [2024-09-13 13:02:15.840242] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14324840299070848675, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_transfer_partition_task_history"}, strlen=45) [2024-09-13 13:02:15.840260] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5041175817293940637, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_wr_sqltext"}, strlen=24) [2024-09-13 13:02:15.840302] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2571263332964794257, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_index_usage_info"}, strlen=30) [2024-09-13 13:02:15.840320] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5000866229888357835, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_audit_log_filter"}, strlen=30) [2024-09-13 13:02:15.840337] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant 
space table name(key=3813316034198982879, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_audit_log_user"}, strlen=28) [2024-09-13 13:02:15.840364] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=5114135863185170653, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_column_privilege"}, strlen=30) [2024-09-13 13:02:15.840386] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] set tenant space table name(key=3024565239093516573, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_column_privilege_history"}, strlen=38) [2024-09-13 13:02:15.840400] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9067502545600094555, table={database_id:201002, name_case_mode:2, table_name:"ENABLED_ROLES"}, strlen=13) [2024-09-13 13:02:15.840459] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6150985665985824275, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_ls_replica_task_history"}, strlen=37) [2024-09-13 13:02:15.840488] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1865455977160712787, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_session_ps_info"}, strlen=29) [2024-09-13 13:02:15.840511] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8926106621815740627, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tracepoint_info"}, strlen=29) 
[2024-09-13 13:02:15.840527] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6253777733840673709, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_compatibility_control"}, strlen=35) [2024-09-13 13:02:15.840546] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16868688502280647699, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_user_proxy_info"}, strlen=29) [2024-09-13 13:02:15.840566] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1377437659161167251, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_user_proxy_info_history"}, strlen=37) [2024-09-13 13:02:15.840584] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11138645184115358713, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_user_proxy_role_info"}, strlen=34) [2024-09-13 13:02:15.840603] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=10353420629622781685, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_user_proxy_role_info_history"}, strlen=42) [2024-09-13 13:02:15.840625] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=2879425852841045007, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_service"}, strlen=21) [2024-09-13 13:02:15.840651] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6929023576706388717, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_resource_limit"}, strlen=35) [2024-09-13 13:02:15.840669] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13246605072146993839, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_tenant_resource_limit_detail"}, strlen=42) [2024-09-13 13:02:15.840724] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1066808762073345267, table={database_id:201001, name_case_mode:2, table_name:"__all_virtual_scheduler_job_run_detail_v2"}, strlen=41) [2024-09-13 13:02:15.840847] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17503530014375511261, table={database_id:201001, name_case_mode:2, table_name:"__idx_12302_all_virtual_ash_i1"}, strlen=30) [2024-09-13 13:02:15.840959] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=16239231304054327411, table={database_id:201001, name_case_mode:2, table_name:"__idx_12185_all_virtual_sql_plan_monitor_i1"}, strlen=43) [2024-09-13 13:02:15.841202] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7789048750964705481, table={database_id:201001, name_case_mode:2, table_name:"__idx_11031_all_virtual_sql_audit_i1"}, strlen=36) [2024-09-13 13:02:15.841227] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table 
name(key=14664382071044604101, table={database_id:201001, name_case_mode:2, table_name:"__idx_11021_all_virtual_sysstat_i1"}, strlen=34) [2024-09-13 13:02:15.841246] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9878436345123346437, table={database_id:201001, name_case_mode:2, table_name:"__idx_11020_all_virtual_sesstat_i1"}, strlen=34) [2024-09-13 13:02:15.841283] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=14727691026288464667, table={database_id:201001, name_case_mode:2, table_name:"__idx_11017_all_virtual_system_event_i1"}, strlen=39) [2024-09-13 13:02:15.841318] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1104783573546073227, table={database_id:201001, name_case_mode:2, table_name:"__idx_11015_all_virtual_session_wait_history_i1"}, strlen=47) [2024-09-13 13:02:15.841355] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11558598683348067675, table={database_id:201001, name_case_mode:2, table_name:"__idx_11014_all_virtual_session_wait_i1"}, strlen=39) [2024-09-13 13:02:15.841387] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=685544362751320801, table={database_id:201001, name_case_mode:2, table_name:"__idx_11013_all_virtual_session_event_i1"}, strlen=40) [2024-09-13 13:02:15.841486] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16792499377288307349, table={database_id:201001, name_case_mode:2, 
table_name:"__idx_11003_all_virtual_plan_cache_stat_i1"}, strlen=42)
[2024-09-13 13:02:15.841732] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=10958272015631773073, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SQL_AUDIT"}, strlen=21)
[2024-09-13 13:02:15.841910] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=16421534873438116337, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PLAN_STAT"}, strlen=21)
[2024-09-13 13:02:15.841941] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=13558547876718044153, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PLAN_CACHE_PLAN_EXPLAIN"}, strlen=35)
[2024-09-13 13:02:15.841989] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=954675123952698065, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_OUTLINE_AGENT"}, strlen=28)
[2024-09-13 13:02:15.842002] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=12151405100145753503, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PRIVILEGE"}, strlen=21)
[2024-09-13 13:02:15.842060] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=15783705509503860481, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SYS_PARAMETER_STAT_AGENT"}, strlen=36)
[2024-09-13 13:02:15.842112] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=1240298715878553129, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_TABLE_INDEX_AGENT"}, strlen=32)
[2024-09-13 13:02:15.842136] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=6619366707306986001, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_CHARSET_AGENT"}, strlen=28)
[2024-09-13 13:02:15.842204] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=18138649733327172245, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_ALL_TABLE_AGENT"}, strlen=30)
[2024-09-13 13:02:15.842234] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=5910669776223974579, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_COLLATION"}, strlen=24)
[2024-09-13 13:02:15.842288] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=15064587668953574105, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SERVER_AGENT"}, strlen=24)
[2024-09-13 13:02:15.842452] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=17400888474716665069, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PLAN_CACHE_STAT"}, strlen=27)
[2024-09-13 13:02:15.842600] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=6010558242954583817, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PROCESSLIST"}, strlen=23)
[2024-09-13 13:02:15.842667] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=13075372594361221059, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SESSION_WAIT"}, strlen=24)
[2024-09-13 13:02:15.842727] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=3057877921373148913, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SESSION_WAIT_HISTORY"}, strlen=32)
[2024-09-13 13:02:15.842780] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=18064947967558857015, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_MEMORY_INFO"}, strlen=23)
[2024-09-13 13:02:15.842823] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=19] set tenant space table name(key=13859510231782735949, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_MEMSTORE_INFO"}, strlen=32)
[2024-09-13 13:02:15.842921] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=4755370316219665787, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_MEMSTORE_INFO"}, strlen=25)
[2024-09-13 13:02:15.842954] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=3800891364362070909, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SESSTAT"}, strlen=19)
[2024-09-13 13:02:15.842998] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=5576264371222165885, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SYSSTAT"}, strlen=19)
[2024-09-13 13:02:15.843045] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=3571432000906451209, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SYSTEM_EVENT"}, strlen=24)
[2024-09-13 13:02:15.843060] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=1616618779671534147, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_SESSION_VARIABLE"}, strlen=31)
[2024-09-13 13:02:15.843071] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17092266054732424853, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_GLOBAL_VARIABLE"}, strlen=30)
[2024-09-13 13:02:15.843089] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12453863776852237401, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_SHOW_CREATE_TABLE"}, strlen=32)
[2024-09-13 13:02:15.843120] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10750467716708913161, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_SHOW_CREATE_PROCEDURE"}, strlen=36)
[2024-09-13 13:02:15.843133] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5020331436799812609, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_SHOW_CREATE_TABLEGROUP"}, strlen=37)
[2024-09-13 13:02:15.843149] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=11959848630817593517, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_PRIVILEGE_GRANT"}, strlen=30)
[2024-09-13 13:02:15.843176] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=8521607528891372577, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_TABLE_COLUMN"}, strlen=27)
[2024-09-13 13:02:15.843215] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15886079649307873263, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TRACE_SPAN_INFO"}, strlen=27)
[2024-09-13 13:02:15.843240] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=17427277139297029807, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_CONCURRENT_LIMIT_SQL_AGENT"}, strlen=41)
[2024-09-13 13:02:15.843252] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=2072573508439406499, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DATA_TYPE"}, strlen=21)
[2024-09-13 13:02:15.843265] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7328156346192825341, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_AUDIT_OPERATION"}, strlen=27)
[2024-09-13 13:02:15.843276] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4979944919678583043, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_AUDIT_ACTION"}, strlen=24)
[2024-09-13 13:02:15.843301] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7490892679011979155, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PX_WORKER_STAT"}, strlen=26)
[2024-09-13 13:02:15.843321] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18047997801730839805, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PS_STAT"}, strlen=19)
[2024-09-13 13:02:15.843350] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5725745366666107581, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PS_ITEM_INFO"}, strlen=24)
[2024-09-13 13:02:15.843389] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=709454584256138057, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_PARAMETER_STAT"}, strlen=33)
[2024-09-13 13:02:15.843433] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14745886186380716369, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SQL_WORKAREA_HISTORY_STAT"}, strlen=37)
[2024-09-13 13:02:15.843476] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=493140323273095767, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SQL_WORKAREA_ACTIVE"}, strlen=31)
[2024-09-13 13:02:15.843499] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=13701859188528486501, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SQL_WORKAREA_HISTOGRAM"}, strlen=34)
[2024-09-13 13:02:15.843528] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=12820534881080475365, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SQL_WORKAREA_MEMORY_INFO"}, strlen=36)
[2024-09-13 13:02:15.843565] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1558734771232747473, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLE_MGR"}, strlen=21)
[2024-09-13 13:02:15.843585] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=912614872360138193, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SERVER_SCHEMA_INFO"}, strlen=30)
[2024-09-13 13:02:15.843670] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5576142519251074051, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SQL_PLAN_MONITOR"}, strlen=28)
[2024-09-13 13:02:15.843687] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11550450796932981881, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SQL_MONITOR_STATNAME"}, strlen=32)
[2024-09-13 13:02:15.843727] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3222509492869381907, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LOCK_WAIT_STAT"}, strlen=26)
[2024-09-13 13:02:15.843766] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] set tenant space table name(key=14766895581699948045, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_OPEN_CURSOR"}, strlen=23)
[2024-09-13 13:02:15.843798] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=6726961343221200067, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_OBJECT_DEFINITION"}, strlen=32)
[2024-09-13 13:02:15.843844] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=8399779441820560587, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_ROUTINE_PARAM_SYS_AGENT"}, strlen=35)
[2024-09-13 13:02:15.843891] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4091649778928606749, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TYPE_SYS_AGENT"}, strlen=26)
[2024-09-13 13:02:15.843931] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3323502805254460675, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TYPE_ATTR_SYS_AGENT"}, strlen=31)
[2024-09-13 13:02:15.843967] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18188473784694160835, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_COLL_TYPE_SYS_AGENT"}, strlen=31)
[2024-09-13 13:02:15.843997] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12217158526274195943, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PACKAGE_SYS_AGENT"}, strlen=29)
[2024-09-13 13:02:15.844062] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3392251834176825089, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_TRIGGER_SYS_AGENT"}, strlen=36)
[2024-09-13 13:02:15.844101] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=240307410300616103, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_ROUTINE_SYS_AGENT"}, strlen=29)
[2024-09-13 13:02:15.844129] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5843102546781691751, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_GLOBAL_TRANSACTION"}, strlen=30)
[2024-09-13 13:02:15.844302] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=15634757523011953937, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLE_REAL_AGENT"}, strlen=28)
[2024-09-13 13:02:15.844371] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6385271315226289767, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_COLUMN_REAL_AGENT"}, strlen=29)
[2024-09-13 13:02:15.844393] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3570238883768216131, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DATABASE_REAL_AGENT"}, strlen=31)
[2024-09-13 13:02:15.844416] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=8959162939136035799, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_AUTO_INCREMENT_REAL_AGENT"}, strlen=37)
[2024-09-13 13:02:15.844476] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13095962599989623387, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PART_REAL_AGENT"}, strlen=27)
[2024-09-13 13:02:15.844522] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=9710273396038989187, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SUB_PART_REAL_AGENT"}, strlen=31)
[2024-09-13 13:02:15.844553] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10139470045281449045, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PACKAGE_REAL_AGENT"}, strlen=30)
[2024-09-13 13:02:15.844567] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9745109132667636759, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SEQUENCE_VALUE_REAL_AGENT"}, strlen=37)
[2024-09-13 13:02:15.844600] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17961325251060528293, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SEQUENCE_OBJECT_REAL_AGENT"}, strlen=38)
[2024-09-13 13:02:15.844687] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3790209578171387355, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_USER_REAL_AGENT"}, strlen=27)
[2024-09-13 13:02:15.844714] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=15220618193421138581, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SYNONYM_REAL_AGENT"}, strlen=30)
[2024-09-13 13:02:15.844748] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=4688348015648221869, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_FOREIGN_KEY_REAL_AGENT"}, strlen=34)
[2024-09-13 13:02:15.844767] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7788941402726656223, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RECYCLEBIN_REAL_AGENT"}, strlen=33)
[2024-09-13 13:02:15.844805] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11063832862446786581, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_ROUTINE_REAL_AGENT"}, strlen=30)
[2024-09-13 13:02:15.844841] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13312122887084384927, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLEGROUP_REAL_AGENT"}, strlen=33)
[2024-09-13 13:02:15.844860] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=14346287031741541231, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_FOREIGN_KEY_COLUMN_REAL_AGENT"}, strlen=41)
[2024-09-13 13:02:15.844895] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=17075554883580067359, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_CONSTRAINT_REAL_AGENT"}, strlen=33)
[2024-09-13 13:02:15.844936] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10840680912220602267, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TYPE_REAL_AGENT"}, strlen=27)
[2024-09-13 13:02:15.844973] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=246093778082682985, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TYPE_ATTR_REAL_AGENT"}, strlen=32)
[2024-09-13 13:02:15.845009] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=13449921089785494825, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_COLL_TYPE_REAL_AGENT"}, strlen=32)
[2024-09-13 13:02:15.845058] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=6385208198904108737, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_ROUTINE_PARAM_REAL_AGENT"}, strlen=36)
[2024-09-13 13:02:15.845082] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=9578908402156358501, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_KEYSTORE_REAL_AGENT"}, strlen=38)
[2024-09-13 13:02:15.845101] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10476030892634719673, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_OLS_POLICY_REAL_AGENT"}, strlen=40)
[2024-09-13 13:02:15.845124] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17204695735239765947, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_OLS_COMPONENT_REAL_AGENT"}, strlen=43)
[2024-09-13 13:02:15.845143] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=991306686467851251, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_OLS_LABEL_REAL_AGENT"}, strlen=39)
[2024-09-13 13:02:15.845174] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4924428405412308721, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_OLS_USER_LEVEL_REAL_AGENT"}, strlen=44)
[2024-09-13 13:02:15.845193] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15472155165878582073, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_TABLESPACE_REAL_AGENT"}, strlen=40)
[2024-09-13 13:02:15.845221] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2663102168381264919, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_PROFILE_REAL_AGENT"}, strlen=37)
[2024-09-13 13:02:15.845241] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13290442952085111093, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_ROLE_GRANTEE_MAP_REAL_AGENT"}, strlen=46)
[2024-09-13 13:02:15.845279] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10142726584912402277, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLE_PRIVILEGE_REAL_AGENT"}, strlen=38)
[2024-09-13 13:02:15.845304] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=16165284582547397937, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_SECURITY_AUDIT_REAL_AGENT"}, strlen=44)
[2024-09-13 13:02:15.845368] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=1028789838245989911, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_TRIGGER_REAL_AGENT"}, strlen=37)
[2024-09-13 13:02:15.845459] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=86976348190435307, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_SECURITY_AUDIT_RECORD_REAL_AGENT"}, strlen=51)
[2024-09-13 13:02:15.845476] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11295070627755566871, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_SYSAUTH_REAL_AGENT"}, strlen=37)
[2024-09-13 13:02:15.845498] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=4192266527385287191, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_OBJAUTH_REAL_AGENT"}, strlen=37)
[2024-09-13 13:02:15.845527] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14220930887511194059, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_ERROR_REAL_AGENT"}, strlen=35)
[2024-09-13 13:02:15.845567] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15558168386859970891, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DEF_SUB_PART_REAL_AGENT"}, strlen=35)
[2024-09-13 13:02:15.845609] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12659027134157013807, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_OBJECT_TYPE_REAL_AGENT"}, strlen=41)
[2024-09-13 13:02:15.845662] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15539920791002881063, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DBLINK_REAL_AGENT"}, strlen=29)
[2024-09-13 13:02:15.845680] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6721190787709688483, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_CONSTRAINT_COLUMN_REAL_AGENT"}, strlen=47)
[2024-09-13 13:02:15.845724] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=17] set tenant space table name(key=1756739908089384057, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_DEPENDENCY_REAL_AGENT"}, strlen=40)
[2024-09-13 13:02:15.845737] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=10359994543880308467, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_TIME_ZONE_REAL_AGENT"}, strlen=39)
[2024-09-13 13:02:15.845753] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13273434282120161073, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_TIME_ZONE_NAME_REAL_AGENT"}, strlen=44)
[2024-09-13 13:02:15.845765] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] set tenant space table name(key=15639890227486043277, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_TIME_ZONE_TRANSITION_REAL_AGENT"}, strlen=50)
[2024-09-13 13:02:15.845784] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7451306385630835347, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_TIME_ZONE_TRANSITION_TYPE_REAL_AGENT"}, strlen=55)
[2024-09-13 13:02:15.845800] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17298239633173097803, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RES_MGR_PLAN_REAL_AGENT"}, strlen=35)
[2024-09-13 13:02:15.845825] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=148013096825657785, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RES_MGR_DIRECTIVE_REAL_AGENT"}, strlen=40)
[2024-09-13 13:02:15.845857] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=15467678308346716461, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TRANS_LOCK_STAT"}, strlen=27)
[2024-09-13 13:02:15.845882] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4430042496122943931, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RES_MGR_MAPPING_RULE_REAL_AGENT"}, strlen=43)
[2024-09-13 13:02:15.845910] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4192386705164775943, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLET_ENCRYPT_INFO"}, strlen=31)
[2024-09-13 13:02:15.845933] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=10539297998045853127, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RES_MGR_CONSUMER_GROUP_REAL_AGENT"}, strlen=45)
[2024-09-13 13:02:15.845971] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9396186560485505931, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_COLUMN_USAGE_REAL_AGENT"}, strlen=35)
[2024-09-13 13:02:15.846027] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=7078196939232024285, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_JOB_REAL_AGENT"}, strlen=26)
[2024-09-13 13:02:15.846051] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=17144977774403110549, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_JOB_LOG_REAL_AGENT"}, strlen=30)
[2024-09-13 13:02:15.846068] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5699000056158046259, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_DIRECTORY_REAL_AGENT"}, strlen=39)
[2024-09-13 13:02:15.846118] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18272392115128898399, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLE_STAT_REAL_AGENT"}, strlen=33)
[2024-09-13 13:02:15.846177] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4790598570325686765, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_COLUMN_STAT_REAL_AGENT"}, strlen=34)
[2024-09-13 13:02:15.846215] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=14725530884049896855, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_HISTOGRAM_STAT_REAL_AGENT"}, strlen=37)
[2024-09-13 13:02:15.846256] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] set tenant space table name(key=13577767256229248273, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_MEMORY_INFO"}, strlen=30)
[2024-09-13 13:02:15.846288] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=16819701078805047747, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_SHOW_CREATE_TRIGGER"}, strlen=34)
[2024-09-13 13:02:15.846332] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] set tenant space table name(key=11644622054002933117, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PX_TARGET_MONITOR"}, strlen=29)
[2024-09-13 13:02:15.846363] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] set tenant space table name(key=8712776080271768179, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_MONITOR_MODIFIED_REAL_AGENT"}, strlen=39)
[2024-09-13 13:02:15.846410] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4293499955277641455, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLE_STAT_HISTORY_REAL_AGENT"}, strlen=41)
[2024-09-13 13:02:15.846469] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6877895504831956541, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_COLUMN_STAT_HISTORY_REAL_AGENT"}, strlen=42)
[2024-09-13 13:02:15.846507] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17802118549550839751, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_HISTOGRAM_STAT_HISTORY_REAL_AGENT"}, strlen=45)
[2024-09-13 13:02:15.846530] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1712033601437769403, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_OPTSTAT_GLOBAL_PREFS_REAL_AGENT"}, strlen=43)
[2024-09-13 13:02:15.846551] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=15746188037927413039, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_OPTSTAT_USER_PREFS_REAL_AGENT"}, strlen=41)
[2024-09-13 13:02:15.846589] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=12181848944604648823, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DBLINK_INFO"}, strlen=23)
[2024-09-13 13:02:15.846606] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10110635535888279475, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DAM_LAST_ARCH_TS_REAL_AGENT"}, strlen=39)
[2024-09-13 13:02:15.846627] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] set tenant space table name(key=10990113641447429811, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DAM_CLEANUP_JOBS_REAL_AGENT"}, strlen=39)
[2024-09-13 13:02:15.846716] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=2449482136152325627, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_SCHEDULER_JOB_REAL_AGENT"}, strlen=43)
[2024-09-13 13:02:15.846756] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15035849274684193315, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_SCHEDULER_PROGRAM_REAL_AGENT"}, strlen=47)
[2024-09-13 13:02:15.846781] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9768154205153569941, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_CONTEXT_REAL_AGENT"}, strlen=30)
[2024-09-13 13:02:15.846804] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13673186295376858265, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_GLOBAL_CONTEXT_VALUE"}, strlen=32)
[2024-09-13 13:02:15.846868] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15638448585014192011, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TRANS_STAT"}, strlen=22)
[2024-09-13 13:02:15.846928] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] set tenant space table name(key=16408700861286933223, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_UNIT"}, strlen=16)
[2024-09-13 13:02:15.846962] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=18183777021749945021, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_KVCACHE_INFO"}, strlen=24)
[2024-09-13 13:02:15.846995] INFO [SHARE.SCHEMA] init_sys_table_name_map
(ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18290794327221945553, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SERVER_COMPACTION_PROGRESS"}, strlen=38) [2024-09-13 13:02:15.847029] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8071610061324103569, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLET_COMPACTION_PROGRESS"}, strlen=38) [2024-09-13 13:02:15.847051] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2523595487200838693, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_COMPACTION_DIAGNOSE_INFO"}, strlen=36) [2024-09-13 13:02:15.847079] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=14841205080070054545, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_COMPACTION_SUGGESTION"}, strlen=33) [2024-09-13 13:02:15.847131] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=9627723801308780307, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLET_COMPACTION_HISTORY"}, strlen=37) [2024-09-13 13:02:15.847173] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4635858271486366655, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_META_TABLE"}, strlen=25) [2024-09-13 13:02:15.847200] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table 
name(key=3808988812138977483, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLET_TO_LS_REAL_AGENT"}, strlen=35) [2024-09-13 13:02:15.847226] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16110788540242951543, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLET_META_TABLE"}, strlen=29) [2024-09-13 13:02:15.847393] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17816722389201792165, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_CORE_ALL_TABLE"}, strlen=26) [2024-09-13 13:02:15.847466] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12283825564198937527, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_CORE_COLUMN_TABLE"}, strlen=29) [2024-09-13 13:02:15.847506] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=2277118550177062541, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DTL_INTERM_RESULT_MONITOR"}, strlen=37) [2024-09-13 13:02:15.847558] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4137663131567473185, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PROXY_SCHEMA"}, strlen=24) [2024-09-13 13:02:15.847602] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11156422085219437981, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PROXY_PARTITION"}, strlen=27) [2024-09-13 
13:02:15.847745] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=4036760859619401965, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PROXY_PARTITION_INFO"}, strlen=32) [2024-09-13 13:02:15.847791] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=17829711576568634181, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PROXY_SUB_PARTITION"}, strlen=31) [2024-09-13 13:02:15.847820] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=2024660004820370095, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_ZONE_MERGE_INFO"}, strlen=27) [2024-09-13 13:02:15.847850] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=13909650094468045473, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_MERGE_INFO"}, strlen=22) [2024-09-13 13:02:15.847913] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=1276426897780863929, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_OBJECT_TYPE_SYS_AGENT"}, strlen=40) [2024-09-13 13:02:15.847956] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=3248031559181960127, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_JOB"}, strlen=22) [2024-09-13 13:02:15.847998] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set 
tenant space table name(key=4858039470039679501, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_JOB_HISTORY"}, strlen=30) [2024-09-13 13:02:15.848055] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10223825026179765599, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_TASK"}, strlen=23) [2024-09-13 13:02:15.848111] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5462966642828940311, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_TASK_HISTORY"}, strlen=31) [2024-09-13 13:02:15.848185] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14932537271955496797, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_SET_FILES"}, strlen=28) [2024-09-13 13:02:15.848208] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=678122965871532353, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PLAN_BASELINE_REAL_AGENT"}, strlen=36) [2024-09-13 13:02:15.848249] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12643871327959592495, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PLAN_BASELINE_ITEM_REAL_AGENT"}, strlen=41) [2024-09-13 13:02:15.848266] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11086152559096392031, table={database_id:201006, name_case_mode:2, 
table_name:"ALL_VIRTUAL_SPM_CONFIG_REAL_AGENT"}, strlen=33) [2024-09-13 13:02:15.848291] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=157798098763440559, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_EVENT_NAME"}, strlen=25) [2024-09-13 13:02:15.848400] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=5445175508371669097, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_ASH"}, strlen=15) [2024-09-13 13:02:15.848423] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16351743573684176731, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DML_STATS"}, strlen=21) [2024-09-13 13:02:15.848446] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=9390272764846486527, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LOG_ARCHIVE_DEST_PARAMETER"}, strlen=38) [2024-09-13 13:02:15.848495] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15416499976402432373, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LOG_ARCHIVE_PROGRESS"}, strlen=32) [2024-09-13 13:02:15.848536] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=86844084200832535, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LOG_ARCHIVE_HISTORY"}, strlen=31) [2024-09-13 13:02:15.848579] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=4065098071906124039, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LOG_ARCHIVE_PIECE_FILES"}, strlen=35) [2024-09-13 13:02:15.848619] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=15695282926027476951, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_LOG_ARCHIVE_PROGRESS"}, strlen=35) [2024-09-13 13:02:15.848635] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=9967748469152860243, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_PARAMETER"}, strlen=28) [2024-09-13 13:02:15.848651] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6309767369305014977, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RESTORE_JOB"}, strlen=23) [2024-09-13 13:02:15.848729] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16635504735248940503, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RESTORE_JOB_HISTORY"}, strlen=31) [2024-09-13 13:02:15.848753] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14268757281890083997, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RESTORE_PROGRESS"}, strlen=28) [2024-09-13 13:02:15.848788] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=16537609085855683711, table={database_id:201006, 
name_case_mode:2, table_name:"ALL_VIRTUAL_LS_RESTORE_PROGRESS"}, strlen=31) [2024-09-13 13:02:15.848822] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11079986511842872525, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_RESTORE_HISTORY"}, strlen=30) [2024-09-13 13:02:15.848865] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5185636516300188181, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_OUTLINE_REAL_AGENT"}, strlen=30) [2024-09-13 13:02:15.848925] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] set tenant space table name(key=674436003923346917, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_OUTLINE_HISTORY_REAL_AGENT"}, strlen=38) [2024-09-13 13:02:15.848951] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=15208256332695772679, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_STORAGE_INFO"}, strlen=31) [2024-09-13 13:02:15.848980] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set tenant space table name(key=4337658739498130951, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_STORAGE_INFO_HISTORY"}, strlen=39) [2024-09-13 13:02:15.849027] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=13188695452423712797, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_DELETE_JOB"}, strlen=29) [2024-09-13 13:02:15.849070] INFO [SHARE.SCHEMA] 
init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12178999444158140051, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_DELETE_JOB_HISTORY"}, strlen=37) [2024-09-13 13:02:15.849112] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12279671715675426833, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_DELETE_TASK"}, strlen=30) [2024-09-13 13:02:15.849152] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5263895854425280477, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_DELETE_TASK_HISTORY"}, strlen=38) [2024-09-13 13:02:15.849176] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5348694771392498337, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BACKUP_DELETE_POLICY"}, strlen=32) [2024-09-13 13:02:15.849220] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2953143415765643941, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DEADLOCK_EVENT_HISTORY"}, strlen=34) [2024-09-13 13:02:15.849264] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7623637792856190687, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LOG_STAT"}, strlen=20) [2024-09-13 13:02:15.849288] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table 
name(key=18253412682607195109, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_REPLAY_STAT"}, strlen=23) [2024-09-13 13:02:15.849309] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=6221187855890335947, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_APPLY_STAT"}, strlen=22) [2024-09-13 13:02:15.849358] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=8679845022177989623, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_ARCHIVE_STAT"}, strlen=24) [2024-09-13 13:02:15.849387] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=18379809772188019243, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_STATUS"}, strlen=21) [2024-09-13 13:02:15.849410] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16505727060290399023, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_RECOVERY_STAT"}, strlen=28) [2024-09-13 13:02:15.849427] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7239619031706282241, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_ELECTION_REFERENCE_INFO"}, strlen=38) [2024-09-13 13:02:15.849464] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=17] set tenant space table name(key=17761123436791173047, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_INFO"}, strlen=23) [2024-09-13 13:02:15.849481] INFO 
[SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3544668226667192493, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_FREEZE_INFO_REAL_AGENT"}, strlen=34) [2024-09-13 13:02:15.849525] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] set tenant space table name(key=2217538040987206071, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_REPLICA_TASK"}, strlen=27) [2024-09-13 13:02:15.849560] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8464908778543838247, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_REPLICA_TASK_PLAN"}, strlen=32) [2024-09-13 13:02:15.849592] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=1196545763225513949, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SHOW_TRACE"}, strlen=22) [2024-09-13 13:02:15.849627] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=14591852108672397359, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DATABASE_PRIVILEGE_REAL_AGENT"}, strlen=41) [2024-09-13 13:02:15.849661] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=71795507584862111, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RLS_POLICY_REAL_AGENT"}, strlen=33) [2024-09-13 13:02:15.849677] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space 
table name(key=4297127640430840957, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RLS_SECURITY_COLUMN_REAL_AGENT"}, strlen=42) [2024-09-13 13:02:15.849694] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3019571970520166377, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RLS_GROUP_REAL_AGENT"}, strlen=32) [2024-09-13 13:02:15.849723] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15895530468481482221, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RLS_CONTEXT_REAL_AGENT"}, strlen=34) [2024-09-13 13:02:15.849741] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16516293575796315009, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RLS_ATTRIBUTE_REAL_AGENT"}, strlen=36) [2024-09-13 13:02:15.849772] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14631205434474557883, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_REWRITE_RULES_REAL_AGENT"}, strlen=43) [2024-09-13 13:02:15.849808] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10067653983357611025, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_SYS_AGENT"}, strlen=28) [2024-09-13 13:02:15.849929] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3233587678844043343, table={database_id:201006, name_case_mode:2, 
table_name:"ALL_VIRTUAL_SQL_PLAN"}, strlen=20) [2024-09-13 13:02:15.849982] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=4263650047599387153, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TRANS_SCHEDULER"}, strlen=27) [2024-09-13 13:02:15.850006] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10060310792005402831, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_ARB_REPLICA_TASK"}, strlen=31) [2024-09-13 13:02:15.850040] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant space table name(key=5635104523151064007, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_ARB_REPLICA_TASK_HISTORY"}, strlen=39) [2024-09-13 13:02:15.850060] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=4083433089237921071, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_ARCHIVE_DEST_STATUS"}, strlen=31) [2024-09-13 13:02:15.850078] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10696786466268002045, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_EXTERNAL_TABLE_FILE_REAL_AGENT"}, strlen=42) [2024-09-13 13:02:15.850093] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=601976859838138119, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DATA_DICTIONARY_IN_LOG_REAL_AGENT"}, strlen=45) [2024-09-13 13:02:15.850122] INFO [SHARE.SCHEMA] 
init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=3238705487434207041, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TASK_OPT_STAT_GATHER_HISTORY"}, strlen=40) [2024-09-13 13:02:15.850153] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5737830572743373211, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TABLE_OPT_STAT_GATHER_HISTORY"}, strlen=41) [2024-09-13 13:02:15.850189] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11155382981955778521, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_OPT_STAT_GATHER_MONITOR"}, strlen=35) [2024-09-13 13:02:15.850218] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6343128485839888535, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LONG_OPS_STATUS_SYS_AGENT"}, strlen=37) [2024-09-13 13:02:15.850248] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6606725556978343807, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_THREAD"}, strlen=18) [2024-09-13 13:02:15.850328] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=18176318640519244947, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_WR_ACTIVE_SESSION_HISTORY"}, strlen=37) [2024-09-13 13:02:15.850357] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] set 
tenant space table name(key=6643518632239972501, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_WR_SNAPSHOT"}, strlen=23) [2024-09-13 13:02:15.850371] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12577008575161087731, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_WR_STATNAME"}, strlen=23) [2024-09-13 13:02:15.850388] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=8666643332122291595, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_WR_SYSSTAT"}, strlen=22) [2024-09-13 13:02:15.850414] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11908078501252541503, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_ARBITRATION_MEMBER_INFO"}, strlen=35) [2024-09-13 13:02:15.850429] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1763406120398108033, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_ARBITRATION_SERVICE_STATUS"}, strlen=38) [2024-09-13 13:02:15.850473] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant space table name(key=15034942500908124765, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_OBJ_LOCK"}, strlen=20) [2024-09-13 13:02:15.850491] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11662224683202798413, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LOG_RESTORE_SOURCE"}, strlen=30) 
[2024-09-13 13:02:15.850515] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10039060784009973357, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BALANCE_JOB_REAL_AGENT"}, strlen=34)
[2024-09-13 13:02:15.850541] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=11028506432573533373, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BALANCE_JOB_HISTORY_REAL_AGENT"}, strlen=42)
[2024-09-13 13:02:15.850593] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=5723546348965405387, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BALANCE_TASK_REAL_AGENT"}, strlen=35)
[2024-09-13 13:02:15.850639] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant space table name(key=13613195856807076283, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_BALANCE_TASK_HISTORY_REAL_AGENT"}, strlen=43)
[2024-09-13 13:02:15.850682] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1067838024223800129, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TRANSFER_TASK_REAL_AGENT"}, strlen=36)
[2024-09-13 13:02:15.850725] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=2534556678289591665, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TRANSFER_TASK_HISTORY_REAL_AGENT"}, strlen=44)
[2024-09-13 13:02:15.850747] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3420978389336913163, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RESOURCE_POOL_SYS_AGENT"}, strlen=35)
[2024-09-13 13:02:15.850770] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=9373951227016434535, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_PX_P2P_DATAHUB"}, strlen=26)
[2024-09-13 13:02:15.850790] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14765604977710309311, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TIMESTAMP_SERVICE"}, strlen=29)
[2024-09-13 13:02:15.850813] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11220192219686643171, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_LOG_RESTORE_STATUS"}, strlen=33)
[2024-09-13 13:02:15.850846] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4509036766234885267, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_PARAMETER"}, strlen=28)
[2024-09-13 13:02:15.850863] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7944264787665449213, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_DBMS_LOCK_ALLOCATED_REAL_AGENT"}, strlen=42)
[2024-09-13 13:02:15.850893] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4299515142568801675, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_WR_CONTROL"}, strlen=22)
[2024-09-13 13:02:15.850944] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=76391472424956913, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_EVENT_HISTORY"}, strlen=32)
[2024-09-13 13:02:15.850962] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1305233128437600335, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_REAL_AGENT"}, strlen=25)
[2024-09-13 13:02:15.850982] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=15536432512756624177, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_FLT_CONFIG"}, strlen=22)
[2024-09-13 13:02:15.851002] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5015684359745427269, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_SCHEDULER_JOB_RUN_DETAIL_REAL_AGENT"}, strlen=54)
[2024-09-13 13:02:15.851032] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=5610503288861208959, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TENANT_SCHEDULER_JOB_CLASS_REAL_AGENT"}, strlen=49)
[2024-09-13 13:02:15.851112] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=14178799646159474029, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RECOVER_TABLE_JOB"}, strlen=29)
[2024-09-13 13:02:15.851187] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2403710223539538387, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_RECOVER_TABLE_JOB_HISTORY"}, strlen=37)
[2024-09-13 13:02:15.851263] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=16488600996134026947, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_IMPORT_TABLE_JOB"}, strlen=28)
[2024-09-13 13:02:15.851336] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=9151659583519782361, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_IMPORT_TABLE_JOB_HISTORY"}, strlen=36)
[2024-09-13 13:02:15.851412] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12942276155059857283, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_IMPORT_TABLE_TASK"}, strlen=29)
[2024-09-13 13:02:15.851497] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=1561300714155895827, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_IMPORT_TABLE_TASK_HISTORY"}, strlen=37)
[2024-09-13 13:02:15.851534] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set tenant space table name(key=2356102848589555039, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_INFO"}, strlen=19)
[2024-09-13 13:02:15.851552] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=809105217149572395, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_CGROUP_CONFIG"}, strlen=25)
[2024-09-13 13:02:15.851574] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=15156250950298074175, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_WR_SYSTEM_EVENT"}, strlen=27)
[2024-09-13 13:02:15.851595] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10817773853198966959, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_WR_EVENT_NAME"}, strlen=25)
[2024-09-13 13:02:15.851706] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=12234017200425496357, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SQLSTAT"}, strlen=19)
[2024-09-13 13:02:15.851814] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=6032939173356268779, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_WR_SQLSTAT"}, strlen=22)
[2024-09-13 13:02:15.851831] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=4156837953389215731, table={database_id:201006, name_case_mode:2, table_name:"TENANT_VIRTUAL_STATNAME"}, strlen=23)
[2024-09-13 13:02:15.851850] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=12088339948231708995, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_AUX_STAT_REAL_AGENT"}, strlen=31)
[2024-09-13 13:02:15.851881] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=2872698207454910155, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SYS_VARIABLE_REAL_AGENT"}, strlen=35)
[2024-09-13 13:02:15.851899] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set tenant space table name(key=11744517584771696973, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SYS_VARIABLE_DEFAULT_VALUE"}, strlen=38)
[2024-09-13 13:02:15.851919] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] set tenant space table name(key=8569970128461830389, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TRANSFER_PARTITION_TASK_REAL_AGENT"}, strlen=46)
[2024-09-13 13:02:15.851960] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=11352197425500656773, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TRANSFER_PARTITION_TASK_HISTORY_REAL_AGENT"}, strlen=54)
[2024-09-13 13:02:15.851977] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=3861796555220160705, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_WR_SQLTEXT"}, strlen=22)
[2024-09-13 13:02:15.852020] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=10730272454303146739, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_INDEX_USAGE_INFO_REAL_AGENT"}, strlen=39)
[2024-09-13 13:02:15.852069] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=5809969797973229023, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_LS_REPLICA_TASK_HISTORY"}, strlen=35)
[2024-09-13 13:02:15.852099] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=7389858351665794671, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SESSION_PS_INFO"}, strlen=27)
[2024-09-13 13:02:15.852121] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] set tenant space table name(key=15141104865829117807, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_TRACEPOINT_INFO"}, strlen=27)
[2024-09-13 13:02:15.852140] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=17327722434898351333, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_USER_PROXY_INFO_REAL_AGENT"}, strlen=38)
[2024-09-13 13:02:15.852157] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] set tenant space table name(key=1448878626267209979, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_USER_PROXY_ROLE_INFO_REAL_AGENT"}, strlen=43)
[2024-09-13 13:02:15.852179] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] set tenant space table name(key=16844231160938440843, table={database_id:201006, name_case_mode:2, table_name:"ALL_VIRTUAL_SERVICE"}, strlen=19)
[2024-09-13 13:02:15.853538] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=0] set tenant space table name(key=617910211972326399, table={database_id:201002, name_case_mode:2, table_name:"ENGINES"}, strlen=7)
[2024-09-13 13:02:15.853792] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=0] set tenant space table name(key=16168277687360983359, table={database_id:201001, name_case_mode:2, table_name:"DBA_OB_LS_LOCATIONS"}, strlen=19)
[2024-09-13 13:02:15.854088] INFO [SHARE.SCHEMA] init_sys_table_name_map (ob_schema_struct.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=0] set tenant space table name(key=9862818242469715331, table={database_id:201001, name_case_mode:2, table_name:"DBA_TAB_MODIFICATIONS"}, strlen=21)
[2024-09-13 13:02:15.863788] INFO [SHARE.SCHEMA] alloc_ (ob_schema_mem_mgr.cpp:115) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=0] alloc schema mgr(tmp_ptr=0x2b07afa04060, tmp_ptr=0x2b07afa04060)
[2024-09-13 13:02:15.864016] INFO [SHARE.SCHEMA] add_tenant (ob_schema_mgr.cpp:1165) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=32] add tenant schema(ret=0, tenant_id=1, tenant_schema={tenant_id:1, schema_version:1, tenant_name:"sys", name_case_mode:2, read_only:false, primary_zone:"", locality:"", previous_locality:"", compatibility_mode:-1, gmt_modified:0, drop_tenant_time:0, status:0, in_recyclebin:false, arbitration_service_status:{status:3}})
[2024-09-13 13:02:15.864063] INFO [SHARE.SCHEMA] add_sys_variable (ob_sys_variable_mgr.cpp:267) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=30] sys variable schema(*tmp_schema={tenant_id:1, schema_version:1, name_case_mode:2, read_only:false})
[2024-09-13 13:02:15.864211] INFO [SHARE.SCHEMA] add_table (ob_schema_mgr.cpp:2525) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=29] add __all_core_table schema(ret=0, ret="OB_SUCCESS", table_schema={tenant_id:1, database_id:201001, tablegroup_id:202001, table_id:1, association_table_id:18446744073709551615, in_offline_ddl_white_list:false, table_name:"__all_core_table", session_id:0, index_type:0, table_type:0, table_mode:{table_mode_flag:0, pk_mode:0, table_state_flag:0, view_created_method_flag:0, table_organization_mode:0, auto_increment_mode:0, rowid_mode:0, view_column_filled_flag:0}, tablespace_id:18446744073709551615, data_table_id:0, name_casemode:-1, schema_version:0, part_level:0, part_option:{part_func_type:0, part_func_expr:"", part_num:1, auto_part:false, auto_part_size:0}, sub_part_option:{part_func_type:0, part_func_expr:"", part_num:-1, auto_part:false, auto_part_size:0}, partition_num:0, def_subpartition_num:0, partition_array:null, def_subpartition_array:null, hidden_partition_array:null, index_status:1, duplicate_scope:0, encryption:"", encrypt_key:"", master_key_id:18446744073709551615, sub_part_template_flags:0, get_tablet_id():{id:1}, max_dependency_version:-1, object_status:1, is_force_view:false, truncate_version:-1}, lbt()="0x24edc06b 0x12b647b1 0x5e4a3c6 0x11e6c0e7 0x11e6e13c 0x1104ce79 0xb8def18 0x7ff47de 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:15.864333] INFO [SHARE.SCHEMA] init (ob_server_schema_service.cpp:214) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=77] init schema service(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:15.864351] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] Succ to register cache(cache_name="schema_cache", priority=1001, cache_id=0)
[2024-09-13 13:02:15.864360] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] Succ to register cache(cache_name="schema_history_cache", priority=1001, cache_id=1)
[2024-09-13 13:02:15.864365] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] Succ to register cache(cache_name="tablet_table_cache", priority=1001, cache_id=2)
[2024-09-13 13:02:15.864418] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.864793] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19912][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=154618822656)
[2024-09-13 13:02:15.864944] INFO register_pm (ob_page_manager.cpp:40) [19912][][T0][Y0-0000000000000000-0-0] [lt=42] register pm finish(ret=0, &pm=0x2b07aecd4340, pm.get_tid()=19912, tenant_id=500)
[2024-09-13 13:02:15.864971] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19912][][T0][Y0-0000000000000000-0-0] [lt=24][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.865124] INFO [SHARE.SCHEMA] init (ob_schema_store.cpp:40) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] [SCHEMA_STORE] schema store init(tenant_id=1)
[2024-09-13 13:02:15.866269] INFO [SHARE.SCHEMA] add_schema (ob_multi_version_schema_service.cpp:2199) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] add schema(tenant_id=1, refreshed_schema_version=1, new_schema_version=1)
[2024-09-13 13:02:15.866296] INFO [SHARE.SCHEMA] alloc_ (ob_schema_mem_mgr.cpp:115) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=19] alloc schema mgr(tmp_ptr=0x2b07afa19530, tmp_ptr=0x2b07afa19530)
[2024-09-13 13:02:15.866409] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:749) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] assign tenant_infos_ cost(ret=0, ret="OB_SUCCESS", cost=8)
[2024-09-13 13:02:15.866463] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:756) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] assign user_infos_ cost(ret=0, ret="OB_SUCCESS", cost=9)
[2024-09-13 13:02:15.866475] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:757) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] assign database_infos_ cost(ret=0, ret="OB_SUCCESS", cost=0)
[2024-09-13 13:02:15.866481] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:758) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] assign database_name_map_ cost(ret=0, ret="OB_SUCCESS", cost=1)
[2024-09-13 13:02:15.866492] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:759) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] assign tablegroup_infos_ cost(ret=0, ret="OB_SUCCESS", cost=0)
[2024-09-13 13:02:15.866509] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:760) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] assign table_infos_ cost(ret=0, ret="OB_SUCCESS", cost=11)
[2024-09-13 13:02:15.866523] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:761) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] assign index_infos_ cost(ret=0, ret="OB_SUCCESS", cost=0)
[2024-09-13 13:02:15.866527] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:762) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] assign aux_vp_infos_ cost(ret=0, ret="OB_SUCCESS", cost=0)
[2024-09-13 13:02:15.866533] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:763) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] assign lob_meta_infos_ cost(ret=0, ret="OB_SUCCESS", cost=0)
[2024-09-13 13:02:15.866537] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:764) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] assign lob_piece_infos_ cost(ret=0, ret="OB_SUCCESS", cost=0)
[2024-09-13 13:02:15.866546] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:765) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] assign drop_tenant_infos_ cost(ret=0, ret="OB_SUCCESS", cost=1)
[2024-09-13 13:02:15.866553] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:766) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] assign table_id_map_ cost(ret=0, ret="OB_SUCCESS", cost=1)
[2024-09-13 13:02:15.866558] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:767) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] assign table_name_map_ cost(ret=0, ret="OB_SUCCESS", cost=1)
[2024-09-13 13:02:15.866568] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:768) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] assign index_name_map_ cost(ret=0, ret="OB_SUCCESS", cost=2)
[2024-09-13 13:02:15.866575] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:769) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] assign aux_vp_name_map_ cost(ret=0, ret="OB_SUCCESS", cost=1)
[2024-09-13 13:02:15.866580] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:770) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] assign foreign_key_name_map_ cost(ret=0, ret="OB_SUCCESS", cost=1)
[2024-09-13 13:02:15.866589] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:771) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] assign constraint_name_map_ cost(ret=0, ret="OB_SUCCESS", cost=1)
[2024-09-13 13:02:15.866593] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:772) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] assign hidden_table_name_map_ cost(ret=0, ret="OB_SUCCESS", cost=0)
[2024-09-13 13:02:15.866639] INFO [SHARE.SCHEMA] assign (ob_schema_mgr.cpp:826) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ObSchemaMgr assign cost(ret=0, ret="OB_SUCCESS", cost=272)
[2024-09-13 13:02:15.866652] INFO [SHARE.SCHEMA] put (ob_schema_mgr_cache.cpp:502) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] put schema mgr(schema version=1)
[2024-09-13 13:02:15.866669] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:372) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14][errcode=-4201] failed to get tenant config(tenant_id=1, ret=-4201)
[2024-09-13 13:02:15.866691] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.866699] INFO [SHARE.SCHEMA] put (ob_schema_mgr_cache.cpp:578) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] dst schema mgr item ptr(tenant_id=1, dst_item=0x2b07af84c030, dst_timestamp=1726203735866698, dst_schema_version=1, target_pos=0)
[2024-09-13 13:02:15.866713] INFO [SHARE.SCHEMA] alloc_and_put_schema_mgr_ (ob_multi_version_schema_service.cpp:2285) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] put schema mgr succeed(schema_version=1, eliminated_schema_version=-1, tenant_id=1)
[2024-09-13 13:02:15.866720] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:372) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6][errcode=-4201] failed to get tenant config(tenant_id=1, ret=-4201)
[2024-09-13 13:02:15.866727] INFO [SHARE.SCHEMA] add_schema (ob_multi_version_schema_service.cpp:2238) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] [REFRESH_SCHEMA] change refreshed_schema_version with new mode(tenant_id=1, new_schema_version=1)
[2024-09-13 13:02:15.866745] INFO [SHARE.SCHEMA] add_schema (ob_multi_version_schema_service.cpp:2245) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] finish add schema(ret=0, ret="OB_SUCCESS", tenant_id=1, new_schema_version=1, cost_ts=481)
[2024-09-13 13:02:15.866760] INFO [SERVER] init (ob_srv_network_frame.cpp:120) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] io thread connection negotiation enabled!
[2024-09-13 13:02:15.866776] INFO create_queue_thread (ob_srv_deliver.cpp:476) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id=27, tg_name=LeaseQueueTh)
[2024-09-13 13:02:15.867056] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19913][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=158913789952)
[2024-09-13 13:02:15.867105] INFO register_pm (ob_page_manager.cpp:40) [19913][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07aed52340, pm.get_tid()=19913, tenant_id=500)
[2024-09-13 13:02:15.867124] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19913][][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.867154] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19913][LeaseQueueTh0][T0][Y0-0000000000000000-0-0] [lt=9] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.867175] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19913][LeaseQueueTh0][T0][Y0-0000000000000000-0-0] [lt=17] Init thread local success
[2024-09-13 13:02:15.867362] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19914][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=163208757248)
[2024-09-13 13:02:15.867396] INFO register_pm (ob_page_manager.cpp:40) [19914][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07aedd0340, pm.get_tid()=19914, tenant_id=500)
[2024-09-13 13:02:15.867418] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19914][][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.867432] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19914][LeaseQueueTh1][T0][Y0-0000000000000000-0-0] [lt=7] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.867453] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19914][LeaseQueueTh1][T0][Y0-0000000000000000-0-0] [lt=20] Init thread local success
[2024-09-13 13:02:15.867680] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19915][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=167503724544)
[2024-09-13 13:02:15.867718] INFO register_pm (ob_page_manager.cpp:40) [19915][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07afc56340, pm.get_tid()=19915, tenant_id=500)
[2024-09-13 13:02:15.867734] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19915][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.867752] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19915][LeaseQueueTh2][T0][Y0-0000000000000000-0-0] [lt=10] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.867760] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19915][LeaseQueueTh2][T0][Y0-0000000000000000-0-0] [lt=7] Init thread local success
[2024-09-13 13:02:15.867826] INFO create_queue_thread (ob_srv_deliver.cpp:476) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id=28, tg_name=DDLQueueTh)
[2024-09-13 13:02:15.868053] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19916][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=171798691840)
[2024-09-13 13:02:15.868119] INFO register_pm (ob_page_manager.cpp:40) [19916][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07afcd4340, pm.get_tid()=19916, tenant_id=500)
[2024-09-13 13:02:15.868146] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19916][][T0][Y0-0000000000000000-0-0] [lt=25][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.868162] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19916][DDLQueueTh0][T0][Y0-0000000000000000-0-0] [lt=7] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.868171] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19916][DDLQueueTh0][T0][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:15.868204] INFO create_queue_thread (ob_srv_deliver.cpp:476) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id=30, tg_name=DDLPQueueTh)
[2024-09-13 13:02:15.868351] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19917][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=176093659136)
[2024-09-13 13:02:15.868459] INFO register_pm (ob_page_manager.cpp:40) [19917][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07afd52340, pm.get_tid()=19917, tenant_id=500)
[2024-09-13 13:02:15.868483] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19917][][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.868501] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19917][DDLPQueueTh0][T0][Y0-0000000000000000-0-0] [lt=7] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.868510] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19917][DDLPQueueTh0][T0][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:15.868707] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19918][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=180388626432)
[2024-09-13 13:02:15.868798] INFO register_pm (ob_page_manager.cpp:40) [19918][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07afdd0340, pm.get_tid()=19918, tenant_id=500)
[2024-09-13 13:02:15.868822] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19918][][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.868832] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19918][DDLPQueueTh1][T0][Y0-0000000000000000-0-0] [lt=7] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.868840] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19918][DDLPQueueTh1][T0][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:15.869024] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19919][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=184683593728)
[2024-09-13 13:02:15.869104] INFO register_pm (ob_page_manager.cpp:40) [19919][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07afe56340, pm.get_tid()=19919, tenant_id=500)
[2024-09-13 13:02:15.869122] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19919][][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.869138] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19919][DDLPQueueTh2][T0][Y0-0000000000000000-0-0] [lt=7] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.869146] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19919][DDLPQueueTh2][T0][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:15.869306] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19920][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=188978561024)
[2024-09-13 13:02:15.869382] INFO register_pm (ob_page_manager.cpp:40) [19920][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07afed4340, pm.get_tid()=19920, tenant_id=500)
[2024-09-13 13:02:15.869400] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19920][][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.869416] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19920][DDLPQueueTh3][T0][Y0-0000000000000000-0-0] [lt=7] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.869424] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19920][DDLPQueueTh3][T0][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:15.869486] INFO create_queue_thread (ob_srv_deliver.cpp:476) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id=29, tg_name=MysqlQueueTh)
[2024-09-13 13:02:15.869669] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19921][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=193273528320)
[2024-09-13 13:02:15.869751] INFO register_pm (ob_page_manager.cpp:40) [19921][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07aff52340, pm.get_tid()=19921, tenant_id=500)
[2024-09-13 13:02:15.869767] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19921][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.869781] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19921][MysqlQueueTh0][T0][Y0-0000000000000000-0-0] [lt=7] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.869789] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19921][MysqlQueueTh0][T0][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:15.869918] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19922][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=197568495616)
[2024-09-13 13:02:15.869995] INFO register_pm (ob_page_manager.cpp:40) [19922][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07affd0340, pm.get_tid()=19922, tenant_id=500)
[2024-09-13 13:02:15.870020] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19922][][T0][Y0-0000000000000000-0-0] [lt=23][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.870036] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19922][MysqlQueueTh1][T0][Y0-0000000000000000-0-0] [lt=8] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.870045] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19922][MysqlQueueTh1][T0][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:15.870177] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19923][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=201863462912)
[2024-09-13 13:02:15.870251] INFO register_pm (ob_page_manager.cpp:40) [19923][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07b0056340, pm.get_tid()=19923, tenant_id=500)
[2024-09-13 13:02:15.870267] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19923][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.870277] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19923][MysqlQueueTh2][T0][Y0-0000000000000000-0-0] [lt=6] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.870285] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19923][MysqlQueueTh2][T0][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:15.870403] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19924][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=206158430208)
[2024-09-13 13:02:15.870473] INFO register_pm (ob_page_manager.cpp:40) [19924][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07b00d4340, pm.get_tid()=19924, tenant_id=500)
[2024-09-13 13:02:15.870496] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19924][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.870510] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19924][MysqlQueueTh3][T0][Y0-0000000000000000-0-0] [lt=10] new task thread create(&translator_=0x55a3869d1210)
[2024-09-13 13:02:15.870519] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [19924][MysqlQueueTh3][T0][Y0-0000000000000000-0-0] [lt=9] Init thread local success
[2024-09-13 13:02:15.870675] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19925][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=210453397504)
[2024-09-13 13:02:15.870743] INFO register_pm (ob_page_manager.cpp:40) [19925][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07b0152340, pm.get_tid()=19925, tenant_id=500)
[2024-09-13 13:02:15.870759] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19925][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.870772] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19925][MysqlQueueTh4][T0][Y0-0000000000000000-0-0] [lt=7] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:15.870910] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19926][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=214748364800)
[2024-09-13 13:02:15.870972] INFO register_pm (ob_page_manager.cpp:40) [19926][][T0][Y0-0000000000000000-0-0] [lt=13] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:15.870990] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19926][][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.871078] INFO create_queue_thread (ob_srv_deliver.cpp:476) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id=31, tg_name=DiagnoseQueueTh)
[2024-09-13 13:02:15.871238] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19927][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=219043332096)
[2024-09-13 13:02:15.871309] INFO register_pm (ob_page_manager.cpp:40) [19927][][T0][Y0-0000000000000000-0-0] [lt=16] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:15.871329] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19927][][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.871513] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19928][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=223338299392)
[2024-09-13 13:02:15.871595] INFO register_pm (ob_page_manager.cpp:40) [19928][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07b04d4340, pm.get_tid()=19928, tenant_id=500)
[2024-09-13 13:02:15.871611] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19928][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.871625] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [19928][DiagnoseQueueTh][T0][Y0-0000000000000000-0-0] [lt=7] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:15.876463] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=65][errcode=-4006] clock generator not inited
[2024-09-13 13:02:15.911517] WDIAG
[STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=32][errcode=-4006] clock generator not inited [2024-09-13 13:02:15.977630] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4006] clock generator not inited [2024-09-13 13:02:16.011603] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] clock generator not inited [2024-09-13 13:02:16.013522] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4006] clock generator not inited [2024-09-13 13:02:16.014058] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.014104] WDIAG [COMMON] get_all_tenant_id (ob_tenant_mgr.cpp:119) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=42][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.014142] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=0] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14649526682, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[]) [2024-09-13 13:02:16.014353] INFO print_current_status (ob_kvcache_hazard_version.cpp:441) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=1] [KVCACHE-HAZARD] hazard version status info: current version: 0 | min_version= 0 | total thread store count: 1 | total nodes count: 0 | [KVCACHE-HAZARD] i= 0 | thread_id= 19911 | inited= 1 | waiting_nodes_count= 0 | last_retire_version= 0 | acquired_version= 0 | [2024-09-13 13:02:16.057455] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=0] Cache replace map node details(ret=0, replace_node_count=0, replace_time=43975, replace_start_pos=0, replace_num=62914) [2024-09-13 13:02:16.057489] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=32] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10) [2024-09-13 13:02:16.078804] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=25][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.091825] INFO [RPC.FRAME] 
create_eio_ (ob_net_easy.cpp:183) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=0] create eio success [2024-09-13 13:02:16.091912] INFO [RPC.FRAME] create_eio_ (ob_net_easy.cpp:186) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=22] create eio success(thread_num=3, lbt()=0x24edc06b 0x14e41533 0x14e445b9 0xb5797bc 0xb8def6d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.091938] INFO [RPC.FRAME] init_rpc_eio_ (ob_net_easy.cpp:199) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=17] init rpc eio success(opts={rpc_io_cnt_:3, high_prio_rpc_io_cnt_:0, mysql_io_cnt_:3, batch_rpc_io_cnt_:3, use_ipv6_:false, tcp_user_timeout_:3000000, tcp_keepidle_:-1389934592, tcp_keepintvl_:6000000, tcp_keepcnt_:10, enable_tcp_keepalive_:1}, lbt()=0x24edc06b 0x14e41677 0x14e447b3 0xb5797bc 0xb8def6d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.092664] INFO [RPC.FRAME] create_eio_ (ob_net_easy.cpp:183) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] create eio success [2024-09-13 13:02:16.092681] INFO [RPC.FRAME] create_eio_ (ob_net_easy.cpp:186) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] create eio success(thread_num=3, lbt()=0x24edc06b 0x14e41533 0x14e44a61 0xb5797bc 0xb8def6d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.092697] INFO [RPC.FRAME] init_mysql_eio_ (ob_net_easy.cpp:213) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] init mysql eio success(opts={rpc_io_cnt_:3, high_prio_rpc_io_cnt_:0, mysql_io_cnt_:3, batch_rpc_io_cnt_:3, use_ipv6_:false, tcp_user_timeout_:3000000, tcp_keepidle_:-1389934592, tcp_keepintvl_:6000000, tcp_keepcnt_:10, enable_tcp_keepalive_:1}, lbt()=0x24edc06b 0x14e417bb 0x14e44c85 0xb5797bc 0xb8def6d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.093253] INFO [RPC.FRAME] create_eio_ (ob_net_easy.cpp:183) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] create eio success [2024-09-13 13:02:16.093266] INFO [RPC.FRAME] create_eio_ (ob_net_easy.cpp:186) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] create eio success(thread_num=1, lbt()=0x24edc06b 0x14e41533 0x14e44cd0 0xb5797bc 0xb8def6d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.093274] INFO [RPC.FRAME] init_mysql_eio_ (ob_net_easy.cpp:213) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] init mysql eio success(opts={rpc_io_cnt_:3, high_prio_rpc_io_cnt_:0, mysql_io_cnt_:3, batch_rpc_io_cnt_:3, use_ipv6_:false, tcp_user_timeout_:3000000, tcp_keepidle_:-1389934592, tcp_keepintvl_:6000000, tcp_keepcnt_:10, enable_tcp_keepalive_:1}, lbt()=0x24edc06b 0x14e417bb 0x14e44ef4 0xb5797bc 0xb8def6d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.093882] INFO [RPC.FRAME] create_eio_ (ob_net_easy.cpp:183) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] create eio success [2024-09-13 13:02:16.093895] INFO [RPC.FRAME] create_eio_ (ob_net_easy.cpp:186) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] create eio success(thread_num=4, lbt()=0x24edc06b 0x14e41533 0x14e44f70 0xb5797bc 0xb8def6d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.093908] INFO [RPC.FRAME] init_rpc_eio_ (ob_net_easy.cpp:199) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] init rpc eio success(opts={rpc_io_cnt_:3, high_prio_rpc_io_cnt_:0, mysql_io_cnt_:3, batch_rpc_io_cnt_:3, use_ipv6_:false, tcp_user_timeout_:3000000, tcp_keepidle_:-1389934592, tcp_keepintvl_:6000000, tcp_keepcnt_:10, enable_tcp_keepalive_:1}, lbt()=0x24edc06b 0x14e41677 0x14e45133 0xb5797bc 0xb8def6d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.094513] INFO [RPC.FRAME] create_eio_ (ob_net_easy.cpp:183) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] create eio success [2024-09-13 13:02:16.094540] INFO [RPC.FRAME] create_eio_ (ob_net_easy.cpp:186) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] create eio success(thread_num=1, lbt()=0x24edc06b 0x14e41533 0x14e4517d 0xb5797bc 0xb8def6d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.094557] INFO 
[RPC.FRAME] init_rpc_eio_ (ob_net_easy.cpp:199) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] init rpc eio success(opts={rpc_io_cnt_:3, high_prio_rpc_io_cnt_:0, mysql_io_cnt_:3, batch_rpc_io_cnt_:3, use_ipv6_:false, tcp_user_timeout_:3000000, tcp_keepidle_:-1389934592, tcp_keepintvl_:6000000, tcp_keepcnt_:10, enable_tcp_keepalive_:1}, lbt()=0x24edc06b 0x14e41677 0x14e45340 0xb5797bc 0xb8def6d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.094594] INFO [SERVER] reload_ssl_config (ob_srv_network_frame.cpp:519) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] finish reload_ssl_config, close ssl [2024-09-13 13:02:16.094683] INFO [RPC.FRAME] add_unix_listen_ (ob_net_easy.cpp:372) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] listen unix domain succ(path=unix:run/sql.sock) [2024-09-13 13:02:16.094715] INFO [RPC.FRAME] net_register_and_add_listen_ (ob_net_easy.cpp:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] listen start,(eio->magic=1321370295473772970) [2024-09-13 13:02:16.094740] INFO [RPC.FRAME] net_register_and_add_listen_ (ob_net_easy.cpp:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] listen start,(eio->magic=6427631941701759361) [2024-09-13 13:02:16.094787] INFO [RPC.OBRPC] set_pipefd_listen (ob_net_keepalive.cpp:233) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] set pipefd: 57 [2024-09-13 13:02:16.094923] INFO [RPC.FRAME] add_unix_listen_ (ob_net_easy.cpp:372) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] listen unix domain succ(path=unix:run/rpc.sock) [2024-09-13 13:02:16.094962] INFO ussl_eloop_regist (ussl_eloop.c:41) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] [ussl] sock regist: 0x55a3956a0300 fd=61 [2024-09-13 13:02:16.094972] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] ob_pthread_create start [2024-09-13 13:02:16.095257] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) 
[19929][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=227633266688)
[2024-09-13 13:02:16.095398] INFO register_pm (ob_page_manager.cpp:40) [19929][][T0][Y0-0000000000000000-0-0] [lt=28] register pm finish(ret=0, &pm=0x2b07b0552340, pm.get_tid()=19929, tenant_id=500)
[2024-09-13 13:02:16.095426] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19929][][T0][Y0-0000000000000000-0-0] [lt=24][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.095481] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ob_pthread_create succeed(thread=0x2b07a098fef0)
[2024-09-13 13:02:16.095501] INFO ussl_init_bg_thread (ussl-loop.c:127) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=17] [ussl] create background thread success!
[2024-09-13 13:02:16.095533] INFO ussl_eloop_regist (ussl_eloop.c:41) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] [ussl] sock regist: 0x55a3956a0290 fd=60
[2024-09-13 13:02:16.095542] INFO uloop_add_listen (ussl-loop.c:92) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] [ussl] listen success, fd:60, port:2882
[2024-09-13 13:02:16.095550] INFO ussl_loop_add_listen (ussl-loop.c:235) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] [ussl] uloop add listen success! port:2882
[2024-09-13 13:02:16.099913] INFO ussl_eloop_regist (ussl_eloop.c:41) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] [ussl] sock regist: 0x55a3956a02c8 fd=64
[2024-09-13 13:02:16.099921] INFO uloop_add_listen (ussl-loop.c:92) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] [ussl] listen success, fd:64, port:2882
[2024-09-13 13:02:16.099924] INFO ussl_loop_add_listen (ussl-loop.c:235) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] [ussl] uloop add listen success! port:2882
[2024-09-13 13:02:16.102548] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO sock regist: 0x2b07b0c040a8 fd=66
[2024-09-13 13:02:16.102565] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] PNIO sock regist: 0x2b07b0d846a8 fd=67
[2024-09-13 13:02:16.102570] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO sock regist: 0x2b07b0d84808 fd=68
[2024-09-13 13:02:16.102573] INFO timerfd_set_interval (timerfd.c:14) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=2] PNIO set interval: 8192
[2024-09-13 13:02:16.102692] INFO pktc_init (packet_client.c:113) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO pktc init succ
[2024-09-13 13:02:16.102707] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] PNIO sock regist: 0x2b07b0c044d0 fd=71
[2024-09-13 13:02:16.102713] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] PNIO sock regist: 0x2b07b0c04450 fd=69
[2024-09-13 13:02:16.102719] INFO listenfd_init (listenfd.c:83) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] PNIO listen succ: 69
[2024-09-13 13:02:16.102734] INFO pkts_init (packet_server.c:80) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] PNIO pkts listen at "0.0.0.0:0"
[2024-09-13 13:02:16.102982] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] ob_pthread_create start
[2024-09-13 13:02:16.103238] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19930][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=231928233984)
[2024-09-13 13:02:16.103358] INFO register_pm (ob_page_manager.cpp:40) [19930][][T0][Y0-0000000000000000-0-0] [lt=23] register pm finish(ret=0, &pm=0x2b07b05d0340, pm.get_tid()=19930, tenant_id=500)
[2024-09-13 13:02:16.103384] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19930][][T0][Y0-0000000000000000-0-0] [lt=23][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.103400] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] ob_pthread_create succeed(thread=0x2b07a092bd90)
[2024-09-13 13:02:16.103415] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO [ratelimit] time: 1726203736103408, bytes: 0, bw: 0.000000 MB/s, add_ts: 1726203736103408, add_bytes: 0
[2024-09-13 13:02:16.106087] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] PNIO sock regist: 0x2b07b10040a8 fd=73
[2024-09-13 13:02:16.106101] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] PNIO sock regist: 0x2b07b11846a8 fd=74
[2024-09-13 13:02:16.106106] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO sock regist: 0x2b07b1184808 fd=75
[2024-09-13 13:02:16.106109] INFO timerfd_set_interval (timerfd.c:14) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=2] PNIO set interval: 8192
[2024-09-13 13:02:16.106231] INFO pktc_init (packet_client.c:113) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO pktc init succ
[2024-09-13 13:02:16.106245] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] PNIO sock regist: 0x2b07b10044d0 fd=78
[2024-09-13 13:02:16.106254] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] PNIO sock regist: 0x2b07b1004450 fd=76
[2024-09-13 13:02:16.106257] INFO listenfd_init (listenfd.c:83) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO listen succ: 76
[2024-09-13 13:02:16.106265] INFO pkts_init (packet_server.c:80) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] PNIO pkts listen at "0.0.0.0:0"
[2024-09-13 13:02:16.106521] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ob_pthread_create start
[2024-09-13 13:02:16.106688] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19931][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=236223201280)
[2024-09-13 13:02:16.106810] INFO register_pm (ob_page_manager.cpp:40) [19931][][T0][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07b1456340, pm.get_tid()=19931, tenant_id=500)
[2024-09-13 13:02:16.106834] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19931][][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.106853] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] ob_pthread_create succeed(thread=0x2b07a092fed0)
[2024-09-13 13:02:16.109816] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] PNIO sock regist: 0x2b07b16040a8 fd=80
[2024-09-13 13:02:16.109829] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] PNIO sock regist: 0x2b07b17846a8 fd=81
[2024-09-13 13:02:16.109834] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO sock regist: 0x2b07b1784808 fd=82
[2024-09-13 13:02:16.109842] INFO timerfd_set_interval (timerfd.c:14) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] PNIO set interval: 8192
[2024-09-13 13:02:16.109974] INFO pktc_init (packet_client.c:113) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO pktc init succ
[2024-09-13 13:02:16.109989] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] PNIO sock regist: 0x2b07b16044d0 fd=85
[2024-09-13 13:02:16.109993] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO sock regist: 0x2b07b1604450 fd=83
[2024-09-13 13:02:16.109997] INFO listenfd_init (listenfd.c:83) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] PNIO listen succ: 83
[2024-09-13 13:02:16.110002] INFO pkts_init (packet_server.c:80) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] PNIO pkts listen at "0.0.0.0:0"
[2024-09-13 13:02:16.110272] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] ob_pthread_create start
[2024-09-13 13:02:16.110504] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19932][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=240518168576)
[2024-09-13 13:02:16.110623] INFO register_pm (ob_page_manager.cpp:40) [19932][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07b14d4340, pm.get_tid()=19932, tenant_id=500)
[2024-09-13 13:02:16.110648] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19932][][T0][Y0-0000000000000000-0-0] [lt=22][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.110667] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] ob_pthread_create succeed(thread=0x2b07a0935cb0)
[2024-09-13 13:02:16.111694] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=24][errcode=-4006] clock generator not inited
[2024-09-13 13:02:16.113468] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] PNIO sock regist: 0x2b07b1a040a8 fd=87
[2024-09-13 13:02:16.113481] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] PNIO sock regist: 0x2b07b1b846a8 fd=88
[2024-09-13 13:02:16.113485] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO sock regist: 0x2b07b1b84808 fd=89
[2024-09-13 13:02:16.113488] INFO timerfd_set_interval (timerfd.c:14) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO set interval: 8192
[2024-09-13 13:02:16.113621] INFO pktc_init (packet_client.c:113) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO pktc init succ
[2024-09-13 13:02:16.113639] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] PNIO sock regist: 0x2b07b1a044d0 fd=92
[2024-09-13 13:02:16.113646] INFO eloop_regist (eloop.c:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] PNIO sock regist: 0x2b07b1a04450 fd=90
[2024-09-13 13:02:16.113649] INFO listenfd_init (listenfd.c:83) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] PNIO listen succ: 90
[2024-09-13 13:02:16.113661] INFO pkts_init (packet_server.c:80) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] PNIO pkts listen at "0.0.0.0:0"
[2024-09-13 13:02:16.113924] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ob_pthread_create start
[2024-09-13 13:02:16.114093] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19933][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=244813135872)
[2024-09-13 13:02:16.114196] INFO register_pm (ob_page_manager.cpp:40) [19933][][T0][Y0-0000000000000000-0-0] [lt=24] register pm finish(ret=0, &pm=0x2b07b1552340, pm.get_tid()=19933, tenant_id=500)
[2024-09-13 13:02:16.114238] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19933][][T0][Y0-0000000000000000-0-0] [lt=39][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.114261] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] ob_pthread_create succeed(thread=0x2b07a093fd90)
[2024-09-13 13:02:16.114264] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=9] PNIO [ratelimit] time: 1726203736114263, bytes: 0, bw: 0.000000 MB/s, add_ts: 1726203736114263, add_bytes: 0
[2024-09-13 13:02:16.114285] INFO [SERVER] init (ob_srv_network_frame.cpp:170) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] poc rpc server start successfully
[2024-09-13 13:02:16.114297] INFO [SERVER] init (ob_srv_network_frame.cpp:175) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] init rpc network frame successfully(ssl_client_authentication="False")
[2024-09-13 13:02:16.114318] INFO [RPC] init (ob_batch_rpc.h:478) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] base thread init finished(is_hp_eio_enabled=false)
[2024-09-13 13:02:16.114330] INFO [RPC] init (ob_batch_rpc.h:478) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] base thread init finished(is_hp_eio_enabled=false)
[2024-09-13 13:02:16.114341] INFO [RPC] init (ob_batch_rpc.h:478) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] base thread init finished(is_hp_eio_enabled=false)
[2024-09-13 13:02:16.114351] INFO [RPC] init (ob_batch_rpc.h:478) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] base thread init finished(is_hp_eio_enabled=false)
[2024-09-13 13:02:16.114361] INFO [RPC] init (ob_batch_rpc.h:478) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] base thread init finished(is_hp_eio_enabled=false)
[2024-09-13 13:02:16.114371] INFO [RPC] init (ob_batch_rpc.h:478) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] base thread init finished(is_hp_eio_enabled=false)
[2024-09-13 13:02:16.114377] INFO [RPC] init (ob_batch_rpc.h:478) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] base thread init finished(is_hp_eio_enabled=false)
[2024-09-13 13:02:16.114382] INFO [RPC] init (ob_batch_rpc.h:478) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] base thread init finished(is_hp_eio_enabled=false)
[2024-09-13 13:02:16.114399] INFO init_network (ob_server.cpp:2386) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] start tg(lib::TGDefIDs::BRPC=25, tg_name=BRPC)
[2024-09-13 13:02:16.114680] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19934][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=249108103168)
[2024-09-13 13:02:16.114797] INFO register_pm (ob_page_manager.cpp:40) [19934][][T0][Y0-0000000000000000-0-0] [lt=40] register pm finish(ret=0, &pm=0x2b07b15d0340, pm.get_tid()=19934, tenant_id=500)
[2024-09-13 13:02:16.114833] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19934][][T0][Y0-0000000000000000-0-0] [lt=33][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.115117] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19935][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=253403070464)
[2024-09-13 13:02:16.115251] INFO register_pm (ob_page_manager.cpp:40) [19935][][T0][Y0-0000000000000000-0-0] [lt=23] register pm finish(ret=0, &pm=0x2b07b1e56340, pm.get_tid()=19935, tenant_id=500)
[2024-09-13 13:02:16.115279] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19935][][T0][Y0-0000000000000000-0-0] [lt=25][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.115524] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19936][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=257698037760)
[2024-09-13 13:02:16.115616] INFO register_pm (ob_page_manager.cpp:40) [19936][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b1ed4340, pm.get_tid()=19936, tenant_id=500)
[2024-09-13 13:02:16.115636] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19936][][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.115891] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19937][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=261993005056)
[2024-09-13 13:02:16.115963] INFO register_pm (ob_page_manager.cpp:40) [19937][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07b1f52340, pm.get_tid()=19937, tenant_id=500)
[2024-09-13 13:02:16.115999] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19937][][T0][Y0-0000000000000000-0-0] [lt=34][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.116189] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19938][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=266287972352)
[2024-09-13 13:02:16.116304] INFO register_pm (ob_page_manager.cpp:40) [19938][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b1fd0340, pm.get_tid()=19938, tenant_id=500)
[2024-09-13 13:02:16.116334] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19938][][T0][Y0-0000000000000000-0-0] [lt=27][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.116576] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19939][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=270582939648)
[2024-09-13 13:02:16.116689] INFO register_pm (ob_page_manager.cpp:40) [19939][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07b2056340, pm.get_tid()=19939, tenant_id=500)
[2024-09-13 13:02:16.116712] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19939][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.116983] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19940][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=274877906944)
[2024-09-13 13:02:16.117109] INFO register_pm (ob_page_manager.cpp:40) [19940][][T0][Y0-0000000000000000-0-0] [lt=24] register pm finish(ret=0, &pm=0x2b07b20d4340, pm.get_tid()=19940, tenant_id=500)
[2024-09-13 13:02:16.117129] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19940][][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.117362] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19941][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=279172874240)
[2024-09-13 13:02:16.117482] INFO register_pm (ob_page_manager.cpp:40) [19941][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07b2152340, pm.get_tid()=19941, tenant_id=500)
[2024-09-13 13:02:16.117511] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19941][][T0][Y0-0000000000000000-0-0] [lt=27][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.117538] INFO init (ob_rl_rpc.cpp:41) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] ObRLGetRegionBWCallback inited
[2024-09-13 13:02:16.117552] INFO init_network (ob_server.cpp:2390) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] start tg(lib::TGDefIDs::RLMGR=26, tg_name=RLMGR)
[2024-09-13 13:02:16.117749] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19942][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=283467841536)
[2024-09-13 13:02:16.117816] INFO register_pm (ob_page_manager.cpp:40) [19942][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b21d0340, pm.get_tid()=19942, tenant_id=500)
[2024-09-13 13:02:16.117834] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19942][][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.125890] INFO [SHARE] init (ob_rs_mgr.cpp:229) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] ObRsMgr init successfully! master rootserver(master_rs="172.16.51.35:2882")
[2024-09-13 13:02:16.126053] INFO [SERVER] init (ob_service.cpp:219) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] [OBSERVICE_NOTICE] init ob_service begin
[2024-09-13 13:02:16.126064] INFO init (ob_heartbeat.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] start tg(lib::TGDefIDs::ObHeartbeat=44, tg_name=ObHeartbeat)
[2024-09-13 13:02:16.126308] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19943][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=287762808832)
[2024-09-13 13:02:16.126470] INFO register_pm (ob_page_manager.cpp:40) [19943][][T0][Y0-0000000000000000-0-0] [lt=26] register pm finish(ret=0, &pm=0x2b07b2c56340, pm.get_tid()=19943, tenant_id=500)
[2024-09-13 13:02:16.126495] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19943][][T0][Y0-0000000000000000-0-0] [lt=22][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.126571] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ObTimer create success(this=0x2b07968adad0, thread_id=19943, lbt()=0x24edc06b 0x13836960 0x115a4182 0xb4ca529 0xb4685d3 0xb8e0375 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:16.126827] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19944][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=292057776128)
[2024-09-13 13:02:16.126869] INFO run1 (ob_timer.cpp:361) [19943][][T0][Y0-0000000000000000-0-0] [lt=11] timer thread started(this=0x2b07968adad0, tid=19943, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:16.126916] INFO register_pm (ob_page_manager.cpp:40) [19944][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b2cd4340, pm.get_tid()=19944, tenant_id=500)
[2024-09-13 13:02:16.126933] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19944][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.126954] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19944][][T0][Y0-0000000000000000-0-0] [lt=8] UniqTaskQueue thread start
[2024-09-13 13:02:16.127088] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19945][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=296352743424)
[2024-09-13 13:02:16.127202] INFO register_pm (ob_page_manager.cpp:40) [19945][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07b2d52340, pm.get_tid()=19945, tenant_id=500)
[2024-09-13 13:02:16.127223] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19945][][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.127243] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19945][][T0][Y0-0000000000000000-0-0] [lt=14] UniqTaskQueue thread start
[2024-09-13 13:02:16.128074] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19946][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=300647710720)
[2024-09-13 13:02:16.128147] INFO register_pm (ob_page_manager.cpp:40) [19946][][T0][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07b2dd0340, pm.get_tid()=19946, tenant_id=500)
[2024-09-13 13:02:16.128165] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19946][][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.128184] INFO [COMMON] run1 (ob_dedup_queue.cpp:361) [19946][][T0][Y0-0000000000000000-0-0] [lt=9] dedup queue thread start(this=0x55a38b350a00)
[2024-09-13 13:02:16.128216] INFO [COMMON] init (ob_dedup_queue.cpp:111) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] init dedup-queue:(thread_num=1, queue_size=20480, task_map_size=20480, total_mem_limit=335544320, hold_mem_limit=167772160, page_size=7936, this=0x55a38b350a00, lbt="0x24edc06b 0x13820f43 0x13820411 0x1274ab62 0xb468615 0xb8e0375 0x7ff47de 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:16.128930] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19947][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=304942678016)
[2024-09-13 13:02:16.129024] INFO register_pm (ob_page_manager.cpp:40) [19947][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07b3456340, pm.get_tid()=19947, tenant_id=500)
[2024-09-13 13:02:16.129049] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19947][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.129061] INFO [COMMON] run1 (ob_dedup_queue.cpp:361) [19947][][T0][Y0-0000000000000000-0-0] [lt=9] dedup queue thread start(this=0x55a387450fc0)
[2024-09-13 13:02:16.129073] INFO [COMMON] init (ob_dedup_queue.cpp:111) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] init dedup-queue:(thread_num=1, queue_size=20480, task_map_size=20480, total_mem_limit=335544320, hold_mem_limit=167772160, page_size=7936, this=0x55a387450fc0, lbt="0x24edc06b 0x13820f43 0x13820411 0x1274ab62 0xb468650 0xb8e0375 0x7ff47de 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:16.129736] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19948][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=309237645312)
[2024-09-13 13:02:16.129813] INFO register_pm (ob_page_manager.cpp:40) [19948][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07b34d4340, pm.get_tid()=19948, tenant_id=500)
[2024-09-13 13:02:16.129833] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19948][][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.129844] INFO [COMMON] run1 (ob_dedup_queue.cpp:361) [19948][][T0][Y0-0000000000000000-0-0] [lt=8] dedup queue thread start(this=0x55a3877d62c0)
[2024-09-13 13:02:16.129852] INFO [COMMON] init (ob_dedup_queue.cpp:111) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] init dedup-queue:(thread_num=1, queue_size=20480, task_map_size=20480, total_mem_limit=335544320, hold_mem_limit=167772160, page_size=7936, this=0x55a3877d62c0, lbt="0x24edc06b 0x13820f43 0x13820411 0x1274ab62 0xb4686e9 0xb8e0375 0x7ff47de 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:16.130344] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19949][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=313532612608)
[2024-09-13 13:02:16.130470] INFO register_pm (ob_page_manager.cpp:40) [19949][][T0][Y0-0000000000000000-0-0] [lt=37] register pm finish(ret=0, &pm=0x2b07b3552340, pm.get_tid()=19949, tenant_id=500)
[2024-09-13 13:02:16.130492] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19949][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.130513] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] init thread success(this=0x2b079e8050d0, id=1, ret=0)
[2024-09-13 13:02:16.130539] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [19949][Occam][T0][Y0-0000000000000000-0-0] [lt=15] thread is running function
[2024-09-13 13:02:16.130552] INFO [OCCAM] init (ob_occam_thread_pool.h:248) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] init occam thread pool success(ret=0, thread_num=1, queue_size_square_of_2=10, lbt()="0x24edc06b 0x8359ee6 0x8358cc3 0x8215155 0x1274ab9d 0xb4686e9 0xb8e0375 0x7ff47de 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:16.131135] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] TimeWheelBase inited success(precision=5000000, start_ticket=345240747, scan_ticket=345240747)
[2024-09-13 13:02:16.131149] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] ObTimeWheel init success(precision=5000000, real_thread_num=1)
[2024-09-13 13:02:16.131353] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19950][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=317827579904)
[2024-09-13 13:02:16.131465] INFO register_pm (ob_page_manager.cpp:40) [19950][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07b35d0340, pm.get_tid()=19950, tenant_id=500)
[2024-09-13 13:02:16.131508] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19950][][T0][Y0-0000000000000000-0-0] [lt=40][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.131534] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] ObTimeWheel start success(timer_name="EventTimer")
[2024-09-13 13:02:16.131548] INFO [OCCAM] init_and_start (ob_occam_timer.h:570) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] init ObOccamTimer success(ret=0)
[2024-09-13 13:02:16.132292] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19951][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=322122547200)
[2024-09-13 13:02:16.132585] INFO register_pm (ob_page_manager.cpp:40) [19951][][T0][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07b3856340, pm.get_tid()=19951, tenant_id=500)
[2024-09-13 13:02:16.132620] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19951][][T0][Y0-0000000000000000-0-0] [lt=32][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.132637] INFO [COMMON] run1 (ob_dedup_queue.cpp:361) [19951][][T0][Y0-0000000000000000-0-0] [lt=10] dedup queue thread start(this=0x55a38b486e00)
[2024-09-13 13:02:16.132652] INFO [COMMON] init (ob_dedup_queue.cpp:111) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] init dedup-queue:(thread_num=1, queue_size=20480, task_map_size=20480, total_mem_limit=335544320, hold_mem_limit=167772160, page_size=7936, this=0x55a38b486e00, lbt="0x24edc06b 0x13820f43 0x13820411 0x1274ab62 0xb46870f 0xb8e0375 0x7ff47de 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:16.132712] WDIAG [LIB] init (ob_tsc_timestamp.cpp:42) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11][errcode=-4007] invariant TSC not support(ret=-4007)
[2024-09-13 13:02:16.133730] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=21] [ussl] sock regist: 0x2b0797f27e40 fd=93
[2024-09-13 13:02:16.133753] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=13] [ussl] accept new connection, fd:93, src_addr:172.16.51.38:48318
[2024-09-13 13:02:16.133992] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19952][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=326417514496)
[2024-09-13 13:02:16.134119] INFO register_pm (ob_page_manager.cpp:40) [19952][][T0][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07b38d4340, pm.get_tid()=19952, tenant_id=500)
[2024-09-13 13:02:16.134149] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19952][][T0][Y0-0000000000000000-0-0] [lt=27][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.134171] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19952][][T0][Y0-0000000000000000-0-0] [lt=9] UniqTaskQueue thread start
[2024-09-13 13:02:16.134391] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19953][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=330712481792)
[2024-09-13 13:02:16.134499] INFO register_pm (ob_page_manager.cpp:40) [19953][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b3952340, pm.get_tid()=19953, tenant_id=500)
[2024-09-13 13:02:16.134517] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19953][][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.134550] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19953][][T0][Y0-0000000000000000-0-0] [lt=25] UniqTaskQueue thread start
[2024-09-13 13:02:16.134740] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19954][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=335007449088)
[2024-09-13 13:02:16.134832] INFO register_pm (ob_page_manager.cpp:40) [19954][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b39d0340, pm.get_tid()=19954, tenant_id=500)
[2024-09-13 13:02:16.134885] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19954][][T0][Y0-0000000000000000-0-0] [lt=32][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.134899] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19954][][T0][Y0-0000000000000000-0-0] [lt=12] UniqTaskQueue thread start
[2024-09-13 13:02:16.135106] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19955][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=339302416384)
[2024-09-13 13:02:16.135220] INFO register_pm (ob_page_manager.cpp:40) [19955][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b4056340, pm.get_tid()=19955, tenant_id=500)
[2024-09-13 13:02:16.135241] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19955][][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.135257] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19955][][T0][Y0-0000000000000000-0-0] [lt=8] UniqTaskQueue thread start
[2024-09-13 13:02:16.135442] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19956][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=343597383680)
[2024-09-13 13:02:16.135530] INFO register_pm (ob_page_manager.cpp:40) [19956][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07b40d4340, pm.get_tid()=19956, tenant_id=500)
[2024-09-13 13:02:16.135566] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19956][][T0][Y0-0000000000000000-0-0] [lt=33][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.135582] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19956][][T0][Y0-0000000000000000-0-0] [lt=10] UniqTaskQueue thread start
[2024-09-13 13:02:16.135745] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19957][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=347892350976)
[2024-09-13 13:02:16.135835] INFO register_pm (ob_page_manager.cpp:40) [19957][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b4152340, pm.get_tid()=19957, tenant_id=500)
[2024-09-13 13:02:16.135855] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19957][][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.135869] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19957][][T0][Y0-0000000000000000-0-0] [lt=9] UniqTaskQueue thread start
[2024-09-13 13:02:16.136020] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19958][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=352187318272)
[2024-09-13 13:02:16.136134] INFO register_pm (ob_page_manager.cpp:40) [19958][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b41d0340, pm.get_tid()=19958, tenant_id=500)
[2024-09-13 13:02:16.136158] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19958][][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.136173] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19958][][T0][Y0-0000000000000000-0-0] [lt=8] UniqTaskQueue thread start
[2024-09-13 13:02:16.136174] INFO [SERVER] init (ob_tablet_table_updater.cpp:206) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=17] init a ObTabletTableUpdater success
[2024-09-13 13:02:16.136424] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19959][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=356482285568)
[2024-09-13 13:02:16.136546] INFO register_pm (ob_page_manager.cpp:40) [19959][][T0][Y0-0000000000000000-0-0] [lt=45] register pm finish(ret=0, &pm=0x2b07b4256340, pm.get_tid()=19959, tenant_id=500)
[2024-09-13 13:02:16.136567] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19959][][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.136582] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19959][][T0][Y0-0000000000000000-0-0] [lt=8] UniqTaskQueue thread start
[2024-09-13 13:02:16.136798] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19960][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=360777252864)
[2024-09-13 13:02:16.136897] INFO register_pm (ob_page_manager.cpp:40) [19960][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07b42d4340, pm.get_tid()=19960, tenant_id=500)
[2024-09-13 13:02:16.136920] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19960][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.136931] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19960][][T0][Y0-0000000000000000-0-0] [lt=8] UniqTaskQueue thread start
[2024-09-13 13:02:16.137179] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19961][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=365072220160)
[2024-09-13 13:02:16.137277] INFO register_pm (ob_page_manager.cpp:40) [19961][][T0][Y0-0000000000000000-0-0] [lt=34] register pm finish(ret=0, &pm=0x2b07b4352340, pm.get_tid()=19961, tenant_id=500)
[2024-09-13 13:02:16.137304] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19961][][T0][Y0-0000000000000000-0-0] [lt=22][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.137316] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19961][][T0][Y0-0000000000000000-0-0] [lt=9] UniqTaskQueue thread start
[2024-09-13 13:02:16.137509] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19962][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=369367187456)
[2024-09-13 13:02:16.137623] INFO register_pm (ob_page_manager.cpp:40) [19962][][T0][Y0-0000000000000000-0-0] [lt=33] register pm finish(ret=0, &pm=0x2b07b43d0340, pm.get_tid()=19962, tenant_id=500)
[2024-09-13 13:02:16.137643] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19962][][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.137662] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19962][][T0][Y0-0000000000000000-0-0] [lt=9] UniqTaskQueue thread start
[2024-09-13 13:02:16.137837] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19963][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=373662154752)
[2024-09-13 13:02:16.137953] INFO register_pm (ob_page_manager.cpp:40) [19963][][T0][Y0-0000000000000000-0-0] [lt=56] register pm finish(ret=0, &pm=0x2b07b4456340, pm.get_tid()=19963, tenant_id=500)
[2024-09-13 13:02:16.137977] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19963][][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.137991] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19963][][T0][Y0-0000000000000000-0-0] [lt=9] UniqTaskQueue thread start
[2024-09-13 13:02:16.138198] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19964][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=377957122048)
[2024-09-13 13:02:16.138297] INFO register_pm (ob_page_manager.cpp:40) [19964][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07b44d4340, pm.get_tid()=19964, tenant_id=500)
[2024-09-13 13:02:16.138316] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19964][][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.138333] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19964][][T0][Y0-0000000000000000-0-0] [lt=6] UniqTaskQueue thread start
[2024-09-13 13:02:16.138511] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19965][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=382252089344)
[2024-09-13 13:02:16.138598] INFO register_pm (ob_page_manager.cpp:40) [19965][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b4552340, pm.get_tid()=19965, tenant_id=500)
[2024-09-13 13:02:16.138636] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19965][][T0][Y0-0000000000000000-0-0] [lt=31][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.138656] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19965][][T0][Y0-0000000000000000-0-0] [lt=16] UniqTaskQueue thread start
[2024-09-13 13:02:16.138811] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19966][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=386547056640)
[2024-09-13 13:02:16.138898] INFO register_pm (ob_page_manager.cpp:40) [19966][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07b45d0340, pm.get_tid()=19966, tenant_id=500)
[2024-09-13 13:02:16.138915] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19966][][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.138921] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19966][][T0][Y0-0000000000000000-0-0] [lt=4] UniqTaskQueue thread start
[2024-09-13 13:02:16.139068] INFO [SHARE] get_next_sess_id
(ob_active_session_guard.cpp:336) [19967][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=390842023936) [2024-09-13 13:02:16.139158] INFO register_pm (ob_page_manager.cpp:40) [19967][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07b4656340, pm.get_tid()=19967, tenant_id=500) [2024-09-13 13:02:16.139186] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19967][][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.139203] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19967][][T0][Y0-0000000000000000-0-0] [lt=12] UniqTaskQueue thread start [2024-09-13 13:02:16.139211] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] create tg succeed(tg_id=292, tg=0x2b07a0963df0, thread_cnt=1, tg->attr_={name:SvrMetaCh, type:3}) [2024-09-13 13:02:16.139228] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] create tg succeed(tg_id=293, tg=0x2b07a0965cb0, thread_cnt=1, tg->attr_={name:SvrMetaCh, type:3}) [2024-09-13 13:02:16.139239] INFO [SERVER] init (ob_service.cpp:261) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] [OBSERVICE_NOTICE] init ob_service finish(ret=0, ret="OB_SUCCESS", inited=true) [2024-09-13 13:02:16.139443] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19968][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=395136991232) [2024-09-13 13:02:16.139519] INFO register_pm (ob_page_manager.cpp:40) [19968][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b46d4340, pm.get_tid()=19968, tenant_id=500) [2024-09-13 13:02:16.139538] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19968][][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.139583] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] 
[lt=4] ObTimer create success(this=0x55a386e0afd0, thread_id=19968, lbt()=0x24edc06b 0x13836960 0x14286508 0x998908b 0xb8e03de 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.139799] INFO run1 (ob_timer.cpp:361) [19968][][T0][Y0-0000000000000000-0-0] [lt=8] timer thread started(this=0x55a386e0afd0, tid=19968, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:16.139935] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19969][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=399431958528) [2024-09-13 13:02:16.139997] INFO register_pm (ob_page_manager.cpp:40) [19969][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07b4752340, pm.get_tid()=19969, tenant_id=500) [2024-09-13 13:02:16.140013] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19969][][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.140022] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19969][][T0][Y0-0000000000000000-0-0] [lt=4] new reentrant thread created(idx=0) [2024-09-13 13:02:16.140190] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19970][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=403726925824) [2024-09-13 13:02:16.140286] INFO register_pm (ob_page_manager.cpp:40) [19970][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b47d0340, pm.get_tid()=19970, tenant_id=500) [2024-09-13 13:02:16.140311] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19970][][T0][Y0-0000000000000000-0-0] [lt=22][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.140326] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19970][][T0][Y0-0000000000000000-0-0] [lt=8] new reentrant thread created(idx=1) [2024-09-13 13:02:16.140469] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) 
[19971][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=408021893120) [2024-09-13 13:02:16.140579] INFO register_pm (ob_page_manager.cpp:40) [19971][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b4856340, pm.get_tid()=19971, tenant_id=500) [2024-09-13 13:02:16.140603] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19971][][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.140613] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19971][][T0][Y0-0000000000000000-0-0] [lt=8] new reentrant thread created(idx=2) [2024-09-13 13:02:16.140757] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19972][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=412316860416) [2024-09-13 13:02:16.140841] INFO register_pm (ob_page_manager.cpp:40) [19972][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07b48d4340, pm.get_tid()=19972, tenant_id=500) [2024-09-13 13:02:16.140888] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19972][][T0][Y0-0000000000000000-0-0] [lt=43][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.140911] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19972][][T0][Y0-0000000000000000-0-0] [lt=20] new reentrant thread created(idx=3) [2024-09-13 13:02:16.141060] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19973][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=416611827712) [2024-09-13 13:02:16.141126] INFO register_pm (ob_page_manager.cpp:40) [19973][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07b4952340, pm.get_tid()=19973, tenant_id=500) [2024-09-13 13:02:16.141143] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19973][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 
13:02:16.141168] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] ObTimer create success(this=0x55a386e0bbd0, thread_id=19973, lbt()=0x24edc06b 0x13836960 0x14286508 0x99890bb 0xb8e03de 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.141514] INFO run1 (ob_timer.cpp:361) [19973][][T0][Y0-0000000000000000-0-0] [lt=8] timer thread started(this=0x55a386e0bbd0, tid=19973, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:16.141586] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19974][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=420906795008) [2024-09-13 13:02:16.141691] INFO register_pm (ob_page_manager.cpp:40) [19974][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07b49d0340, pm.get_tid()=19974, tenant_id=500) [2024-09-13 13:02:16.141714] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19974][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.141745] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19974][][T0][Y0-0000000000000000-0-0] [lt=21] new reentrant thread created(idx=0) [2024-09-13 13:02:16.141954] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19975][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=425201762304) [2024-09-13 13:02:16.142067] INFO register_pm (ob_page_manager.cpp:40) [19975][][T0][Y0-0000000000000000-0-0] [lt=39] register pm finish(ret=0, &pm=0x2b07b4a56340, pm.get_tid()=19975, tenant_id=500) [2024-09-13 13:02:16.142088] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19975][][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.142105] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19975][][T0][Y0-0000000000000000-0-0] [lt=8] new reentrant thread created(idx=0) [2024-09-13 
13:02:16.144377] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19976][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=429496729600) [2024-09-13 13:02:16.144483] INFO register_pm (ob_page_manager.cpp:40) [19976][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07b4ad4340, pm.get_tid()=19976, tenant_id=500) [2024-09-13 13:02:16.144523] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19976][][T0][Y0-0000000000000000-0-0] [lt=36][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.144548] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19976][][T0][Y0-0000000000000000-0-0] [lt=11] new reentrant thread created(idx=0) [2024-09-13 13:02:16.144748] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19977][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=433791696896) [2024-09-13 13:02:16.144858] INFO register_pm (ob_page_manager.cpp:40) [19977][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b4b52340, pm.get_tid()=19977, tenant_id=500) [2024-09-13 13:02:16.144903] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19977][][T0][Y0-0000000000000000-0-0] [lt=42][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.144923] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19977][][T0][Y0-0000000000000000-0-0] [lt=10] new reentrant thread created(idx=0) [2024-09-13 13:02:16.145153] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19978][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=438086664192) [2024-09-13 13:02:16.145261] INFO register_pm (ob_page_manager.cpp:40) [19978][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b4bd0340, pm.get_tid()=19978, tenant_id=500) [2024-09-13 13:02:16.145282] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19978][][T0][Y0-0000000000000000-0-0] 
[lt=18][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.145293] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19978][][T0][Y0-0000000000000000-0-0] [lt=8] new reentrant thread created(idx=0) [2024-09-13 13:02:16.145505] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19979][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=442381631488) [2024-09-13 13:02:16.145614] INFO register_pm (ob_page_manager.cpp:40) [19979][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07b5256340, pm.get_tid()=19979, tenant_id=500) [2024-09-13 13:02:16.145638] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19979][][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.145657] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19979][][T0][Y0-0000000000000000-0-0] [lt=9] new reentrant thread created(idx=0) [2024-09-13 13:02:16.146487] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19980][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=446676598784) [2024-09-13 13:02:16.146629] INFO register_pm (ob_page_manager.cpp:40) [19980][][T0][Y0-0000000000000000-0-0] [lt=44] register pm finish(ret=0, &pm=0x2b07b52d4340, pm.get_tid()=19980, tenant_id=500) [2024-09-13 13:02:16.146652] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19980][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.146685] INFO [COMMON] run1 (ob_dedup_queue.cpp:361) [19980][][T0][Y0-0000000000000000-0-0] [lt=9] dedup queue thread start(this=0x55a38744f740) [2024-09-13 13:02:16.146706] INFO [COMMON] init (ob_dedup_queue.cpp:111) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] init dedup-queue:(thread_num=1, queue_size=20480, task_map_size=20480, total_mem_limit=335544320, hold_mem_limit=167772160, page_size=7936, this=0x55a38744f740, 
lbt="0x24edc06b 0x13820f43 0x13820411 0x1274ab62 0x9989a33 0xb8e03de 0x7ff47de 0x2b0795fc03d5 0x5e9ab75") [2024-09-13 13:02:16.147183] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] create tg succeed(tg_id=294, tg=0x2b07a0981d90, thread_cnt=8, tg->attr_={name:DDLTaskExecutor3, type:2}) [2024-09-13 13:02:16.147377] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19981][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=450971566080) [2024-09-13 13:02:16.147499] INFO register_pm (ob_page_manager.cpp:40) [19981][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b5352340, pm.get_tid()=19981, tenant_id=500) [2024-09-13 13:02:16.147523] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19981][][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.147543] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19981][][T0][Y0-0000000000000000-0-0] [lt=8] new reentrant thread created(idx=0) [2024-09-13 13:02:16.147777] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19982][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=455266533376) [2024-09-13 13:02:16.147907] INFO register_pm (ob_page_manager.cpp:40) [19982][][T0][Y0-0000000000000000-0-0] [lt=41] register pm finish(ret=0, &pm=0x2b07b53d0340, pm.get_tid()=19982, tenant_id=500) [2024-09-13 13:02:16.147931] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19982][][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.147973] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] ObTimer create success(this=0x55a3876328c0, thread_id=19982, lbt()=0x24edc06b 0x13836960 0xab6c793 0x9989af1 0xb8e03de 0x7ff47de 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.148235] INFO [SHARE] get_next_sess_id 
(ob_active_session_guard.cpp:336) [19983][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=459561500672) [2024-09-13 13:02:16.148246] INFO run1 (ob_timer.cpp:361) [19982][][T0][Y0-0000000000000000-0-0] [lt=12] timer thread started(this=0x55a3876328c0, tid=19982, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:16.148346] INFO register_pm (ob_page_manager.cpp:40) [19983][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b5456340, pm.get_tid()=19983, tenant_id=500) [2024-09-13 13:02:16.148362] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19983][][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.148372] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] simple thread pool init success(name=unknown, thread_num=1, task_num_limit=1) [2024-09-13 13:02:16.148700] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19984][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=463856467968) [2024-09-13 13:02:16.148820] INFO register_pm (ob_page_manager.cpp:40) [19984][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b54d4340, pm.get_tid()=19984, tenant_id=500) [2024-09-13 13:02:16.148841] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19984][][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.148853] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] simple thread pool init success(name=unknown, thread_num=1, task_num_limit=1) [2024-09-13 13:02:16.149152] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19985][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=468151435264) [2024-09-13 13:02:16.149261] 
INFO register_pm (ob_page_manager.cpp:40) [19985][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07b5552340, pm.get_tid()=19985, tenant_id=500) [2024-09-13 13:02:16.149285] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19985][][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.149296] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19985][][T0][Y0-0000000000000000-0-0] [lt=9] new reentrant thread created(idx=0) [2024-09-13 13:02:16.149580] INFO [SERVER] init_sql (ob_server.cpp:2510) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] init sql [2024-09-13 13:02:16.156237] INFO [SERVER] init_sql (ob_server.cpp:2519) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] init sql session mgr done [2024-09-13 13:02:16.156251] INFO [SERVER] init_sql (ob_server.cpp:2520) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] init sql location cache done [2024-09-13 13:02:16.156556] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19986][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=472446402560) [2024-09-13 13:02:16.156689] INFO register_pm (ob_page_manager.cpp:40) [19986][][T0][Y0-0000000000000000-0-0] [lt=33] register pm finish(ret=0, &pm=0x2b07b55d0340, pm.get_tid()=19986, tenant_id=500) [2024-09-13 13:02:16.156714] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19986][][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.156730] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [19986][][T0][Y0-0000000000000000-0-0] [lt=9] new reentrant thread created(idx=0) [2024-09-13 13:02:16.156836] INFO [SERVER] init_sql (ob_server.cpp:2532) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] init sql engine done [2024-09-13 13:02:16.156859] INFO [SQL.EXE] run2 (ob_maintain_dependency_info_task.cpp:210) [19986][MaintainDepInfo][T0][Y0-0000000000000000-0-0] 
[lt=13] async task queue start [2024-09-13 13:02:16.156908] INFO [SQL.EXE] run2 (ob_maintain_dependency_info_task.cpp:227) [19986][MaintainDepInfo][T0][Y0-0000000000000000-0-0] [lt=0] [ASYNC TASK QUEUE](queue_.size()=0, sys_view_consistent_.size()=0) [2024-09-13 13:02:16.160592] INFO [LIB] init (ob_libxml2_sax_handler.cpp:116) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] saxhandler init(xmlIsMainThread()=1) [2024-09-13 13:02:16.160605] INFO [SERVER] init_sql (ob_server.cpp:2552) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] init sql done [2024-09-13 13:02:16.160624] INFO [SERVER] init_sql_runner (ob_server.cpp:2568) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] init sql runner done [2024-09-13 13:02:16.160839] INFO [SERVER] init_sequence (ob_server.cpp:2581) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] init sequence engine done [2024-09-13 13:02:16.160845] INFO [SERVER] init_pl (ob_server.cpp:2590) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] init pl [2024-09-13 13:02:16.162990] INFO [PL] compile_module (ob_llvm_helper.cpp:628) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ================Optimized LLVM Module================ [2024-09-13 13:02:16.163112] INFO [PL] dump_module (ob_llvm_helper.cpp:650) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] Dump LLVM Compile Module! 
(s.str().c_str()="; ModuleID = 'PL/SQL' source_filename = "PL/SQL" target datalayout = "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128" define i64 @pl_init_func(i64 %0) { entry: ret i64 0 } !llvm.module.flags = !{!0} !0 = !{i32 2, !"Debug Info Version", i32 3} ") [2024-09-13 13:02:16.171935] INFO [SERVER] init_pl (ob_server.cpp:2594) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] init pl engine done [2024-09-13 13:02:16.172470] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19987][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=476741369856) [2024-09-13 13:02:16.172618] INFO register_pm (ob_page_manager.cpp:40) [19987][][T0][Y0-0000000000000000-0-0] [lt=26] register pm finish(ret=0, &pm=0x2b07b6456340, pm.get_tid()=19987, tenant_id=500) [2024-09-13 13:02:16.172647] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19987][][T0][Y0-0000000000000000-0-0] [lt=25][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.172670] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19987][][T0][Y0-0000000000000000-0-0] [lt=10] UniqTaskQueue thread start [2024-09-13 13:02:16.172919] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19988][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=481036337152) [2024-09-13 13:02:16.173063] INFO register_pm (ob_page_manager.cpp:40) [19988][][T0][Y0-0000000000000000-0-0] [lt=24] register pm finish(ret=0, &pm=0x2b07b64d4340, pm.get_tid()=19988, tenant_id=500) [2024-09-13 13:02:16.173083] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19988][][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.173100] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19988][][T0][Y0-0000000000000000-0-0] [lt=6] UniqTaskQueue thread start [2024-09-13 13:02:16.173397] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) 
[19989][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=485331304448) [2024-09-13 13:02:16.173521] INFO register_pm (ob_page_manager.cpp:40) [19989][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b6552340, pm.get_tid()=19989, tenant_id=500) [2024-09-13 13:02:16.173547] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19989][][T0][Y0-0000000000000000-0-0] [lt=23][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.173566] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19989][][T0][Y0-0000000000000000-0-0] [lt=13] UniqTaskQueue thread start [2024-09-13 13:02:16.173835] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19990][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=489626271744) [2024-09-13 13:02:16.173947] INFO register_pm (ob_page_manager.cpp:40) [19990][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07b65d0340, pm.get_tid()=19990, tenant_id=500) [2024-09-13 13:02:16.173970] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19990][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.173997] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19990][][T0][Y0-0000000000000000-0-0] [lt=8] UniqTaskQueue thread start [2024-09-13 13:02:16.174245] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19991][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=493921239040) [2024-09-13 13:02:16.174356] INFO register_pm (ob_page_manager.cpp:40) [19991][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b6656340, pm.get_tid()=19991, tenant_id=500) [2024-09-13 13:02:16.174379] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19991][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.174395] INFO [SERVER] run1 
(ob_uniq_task_queue.h:339) [19991][][T0][Y0-0000000000000000-0-0] [lt=8] UniqTaskQueue thread start [2024-09-13 13:02:16.174636] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19992][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=498216206336) [2024-09-13 13:02:16.174740] INFO register_pm (ob_page_manager.cpp:40) [19992][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07b66d4340, pm.get_tid()=19992, tenant_id=500) [2024-09-13 13:02:16.174755] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19992][][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.174774] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19992][][T0][Y0-0000000000000000-0-0] [lt=5] UniqTaskQueue thread start [2024-09-13 13:02:16.175111] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19993][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=502511173632) [2024-09-13 13:02:16.175253] INFO register_pm (ob_page_manager.cpp:40) [19993][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07b6752340, pm.get_tid()=19993, tenant_id=500) [2024-09-13 13:02:16.175275] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19993][][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:16.175289] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19993][][T0][Y0-0000000000000000-0-0] [lt=8] UniqTaskQueue thread start [2024-09-13 13:02:16.175519] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19994][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=506806140928) [2024-09-13 13:02:16.175609] INFO register_pm (ob_page_manager.cpp:40) [19994][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b67d0340, pm.get_tid()=19994, tenant_id=500) [2024-09-13 13:02:16.175630] WDIAG 
[STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19994][][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.175647] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19994][][T0][Y0-0000000000000000-0-0] [lt=12] UniqTaskQueue thread start
[2024-09-13 13:02:16.175818] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19995][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=511101108224)
[2024-09-13 13:02:16.175916] INFO register_pm (ob_page_manager.cpp:40) [19995][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07b6856340, pm.get_tid()=19995, tenant_id=500)
[2024-09-13 13:02:16.175943] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19995][][T0][Y0-0000000000000000-0-0] [lt=24][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.176005] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] ObTimer create success(this=0x55a386aed360, thread_id=19995, lbt()=0x24edc06b 0x13836960 0x119a051f 0x119a00dd 0xb8e0811 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:16.176241] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19996][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=515396075520)
[2024-09-13 13:02:16.176353] INFO run1 (ob_timer.cpp:361) [19995][][T0][Y0-0000000000000000-0-0] [lt=11] timer thread started(this=0x55a386aed360, tid=19995, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:16.176364] INFO register_pm (ob_page_manager.cpp:40) [19996][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07b68d4340, pm.get_tid()=19996, tenant_id=500)
[2024-09-13 13:02:16.176380] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19996][][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.176412] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] ObTimer create success(this=0x55a386aed460, thread_id=19996, lbt()=0x24edc06b 0x13836960 0x119a057c 0x119a00dd 0xb8e0811 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:16.176662] INFO run1 (ob_timer.cpp:361) [19996][][T0][Y0-0000000000000000-0-0] [lt=6] timer thread started(this=0x55a386aed460, tid=19996, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:16.176699] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19997][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=519691042816)
[2024-09-13 13:02:16.176851] INFO register_pm (ob_page_manager.cpp:40) [19997][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07b6952340, pm.get_tid()=19997, tenant_id=500)
[2024-09-13 13:02:16.176883] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19997][][T0][Y0-0000000000000000-0-0] [lt=31][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.176916] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] ObTimer create success(this=0x55a386aed560, thread_id=19997, lbt()=0x24edc06b 0x13836960 0x119a05d9 0x119a00dd 0xb8e0811 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:16.177118] INFO run1 (ob_timer.cpp:361) [19997][][T0][Y0-0000000000000000-0-0] [lt=15] timer thread started(this=0x55a386aed560, tid=19997, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:16.179387] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19998][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=523986010112)
[2024-09-13 13:02:16.179536] INFO register_pm (ob_page_manager.cpp:40) [19998][][T0][Y0-0000000000000000-0-0] [lt=40] register pm finish(ret=0, &pm=0x2b07b69d0340, pm.get_tid()=19998, tenant_id=500)
[2024-09-13 13:02:16.179561] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19998][][T0][Y0-0000000000000000-0-0] [lt=22][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.179588] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19998][][T0][Y0-0000000000000000-0-0] [lt=10] UniqTaskQueue thread start
[2024-09-13 13:02:16.179733] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [19999][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=528280977408)
[2024-09-13 13:02:16.179803] INFO register_pm (ob_page_manager.cpp:40) [19999][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b6e56340, pm.get_tid()=19999, tenant_id=500)
[2024-09-13 13:02:16.179830] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [19999][][T0][Y0-0000000000000000-0-0] [lt=25][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.179899] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [19999][][T0][Y0-0000000000000000-0-0] [lt=37] UniqTaskQueue thread start
[2024-09-13 13:02:16.180036] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20000][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=532575944704)
[2024-09-13 13:02:16.180116] INFO register_pm (ob_page_manager.cpp:40) [20000][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b6ed4340, pm.get_tid()=20000, tenant_id=500)
[2024-09-13 13:02:16.180139] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [20000][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.180477] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20000][][T0][Y0-0000000000000000-0-0] [lt=325] new reentrant thread created(idx=0)
[2024-09-13 13:02:16.180504] INFO [SHARE.LOCATION] init (ob_tablet_location_refresh_service.cpp:300) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] [REFRESH_TABLET_LOCATION] init service(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:16.181782] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20001][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=536870912000)
[2024-09-13 13:02:16.181902] INFO register_pm (ob_page_manager.cpp:40) [20001][][T0][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07b6f52340, pm.get_tid()=20001, tenant_id=500)
[2024-09-13 13:02:16.181928] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [20001][][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.181963] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [20001][][T0][Y0-0000000000000000-0-0] [lt=10] UniqTaskQueue thread start
[2024-09-13 13:02:16.183282] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20002][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=541165879296)
[2024-09-13 13:02:16.183388] INFO register_pm (ob_page_manager.cpp:40) [20002][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07b6fd0340, pm.get_tid()=20002, tenant_id=500)
[2024-09-13 13:02:16.183410] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [20002][][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.183451] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [20002][][T0][Y0-0000000000000000-0-0] [lt=9] UniqTaskQueue thread start
[2024-09-13 13:02:16.183454] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] Succ to register cache(cache_name="vtable_cache", priority=1000, cache_id=3)
[2024-09-13 13:02:16.183643] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20003][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=545460846592)
[2024-09-13 13:02:16.183724] INFO register_pm (ob_page_manager.cpp:40) [20003][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07b7656340, pm.get_tid()=20003, tenant_id=500)
[2024-09-13 13:02:16.183760] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [20003][][T0][Y0-0000000000000000-0-0] [lt=33][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.183788] INFO [SERVER] run1 (ob_uniq_task_queue.h:339) [20003][][T0][Y0-0000000000000000-0-0] [lt=15] UniqTaskQueue thread start
[2024-09-13 13:02:16.184108] INFO [SHARE] init (ob_gais_rpc.cpp:200) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] gais request rpc inited success(this=0x55a38b77def0, self="172.16.51.35:2882")
[2024-09-13 13:02:16.184128] INFO [SHARE] init (ob_gais_client.cpp:48) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] gais client init success(self="172.16.51.35:2882", this=0x55a38b77d8c0)
[2024-09-13 13:02:16.184141] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] ObSliceAlloc init finished(bsize_=7936, isize_=576, slice_limit_=7536, tmallocator_=NULL)
[2024-09-13 13:02:16.184148] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] ObSliceAlloc init finished(bsize_=7936, isize_=64, slice_limit_=7536, tmallocator_=NULL)
[2024-09-13 13:02:16.186103] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] ObSliceAlloc init finished(bsize_=7936, isize_=384, slice_limit_=7536, tmallocator_=NULL)
[2024-09-13 13:02:16.186237] WDIAG [SERVER] get_network_speed_from_config_file (ob_server.cpp:2894) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11][errcode=-4027] NIC Config file doesn't exist, auto detecting(nic_rate_path="etc/nic.rate.config", ret=-4027, ret="OB_FILE_NOT_EXIST")
[2024-09-13 13:02:16.186304] WDIAG load_file_to_string (utility.h:662) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12][errcode=0] read /sys/class/net/eth0/speed failed, errno 22
[2024-09-13 13:02:16.186317] WDIAG get_ethernet_speed (utility.cpp:580) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000
[2024-09-13 13:02:16.186335] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0")
[2024-09-13 13:02:16.186346] INFO [COMMON] init (utility.cpp:1342) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] init bandwidth(rate_=78643200, comment_=in)
[2024-09-13 13:02:16.186356] INFO [COMMON] init (utility.cpp:1342) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] init bandwidth(rate_=78643200, comment_=out)
[2024-09-13 13:02:16.186363] INFO [SERVER] init_bandwidth_throttle (ob_server.cpp:2974) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] succeed to init_bandwidth_throttle(sys_bkgd_net_percentage_=60, network_speed=131072000, rate=78643200)
[2024-09-13 13:02:16.186613] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20004][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=549755813888)
[2024-09-13 13:02:16.186704] INFO register_pm (ob_page_manager.cpp:40) [20004][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07b76d4340, pm.get_tid()=20004, tenant_id=500)
[2024-09-13 13:02:16.186752] WDIAG [STORAGE.TRANS] getClock (ob_clock_generator.h:70) [20004][][T0][Y0-0000000000000000-0-0] [lt=44][errcode=-4006] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:16.186771] INFO [STORAGE.TRANS] init (ob_clock_generator.cpp:56) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] clock generator inited success
[2024-09-13 13:02:16.186805] INFO [SERVER] init_storage (ob_server.cpp:2729) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] clog dir is empty
[2024-09-13 13:02:16.186861] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] Succ to register cache(cache_name="index_block_cache", priority=10, cache_id=4)
[2024-09-13 13:02:16.186887] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] Succ to register cache(cache_name="user_block_cache", priority=1, cache_id=5)
[2024-09-13 13:02:16.186897] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Succ to register cache(cache_name="user_row_cache", priority=1, cache_id=6)
[2024-09-13 13:02:16.186908] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Succ to register cache(cache_name="bf_cache", priority=1, cache_id=7)
[2024-09-13 13:02:16.214646] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=0] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14511683994, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[])
[2024-09-13 13:02:16.221145] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Succ to register cache(cache_name="fuse_row_cache", priority=1, cache_id=8)
[2024-09-13 13:02:16.221178] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=22] Succ to register cache(cache_name="storage_meta_cache", priority=10, cache_id=9)
[2024-09-13 13:02:16.221375] INFO [STORAGE] init (ob_resource_map.h:265) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] init resource map success(ret=0, attr=tenant_id=500, label=TmpFileManager, ctx_id=0, prio=0, bkt_num=12289)
[2024-09-13 13:02:16.221410] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] Succ to register cache(cache_name="tmp_page_cache", priority=1, cache_id=10)
[2024-09-13 13:02:16.221420] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] Succ to register cache(cache_name="tmp_block_cache", priority=1, cache_id=11)
[2024-09-13 13:02:16.221732] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20005][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=554050781184)
[2024-09-13 13:02:16.221937] INFO register_pm (ob_page_manager.cpp:40) [20005][][T0][Y0-0000000000000000-0-0] [lt=43] register pm finish(ret=0, &pm=0x2b07b7752340, pm.get_tid()=20005, tenant_id=500)
[2024-09-13 13:02:16.222036] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] ObTimer create success(this=0x55a387b4e4b0, thread_id=20005, lbt()=0x24edc06b 0x13836960 0xfd7f610 0xb8e0c4d 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:16.222267] INFO [STORAGE] init (ob_disk_usage_reporter.cpp:60) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] ObDistUsageReportTask init successful(ret=0)
[2024-09-13 13:02:16.222280] INFO init_storage (ob_server.cpp:2772) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] start tg(lib::TGDefIDs::DiskUseReport=56, tg_name=DiskUseReport)
[2024-09-13 13:02:16.222388] INFO run1 (ob_timer.cpp:361) [20005][][T0][Y0-0000000000000000-0-0] [lt=27] timer thread started(this=0x55a387b4e4b0, tid=20005, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:16.222496] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20006][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=558345748480)
[2024-09-13 13:02:16.222615] INFO register_pm (ob_page_manager.cpp:40) [20006][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07b77d0340, pm.get_tid()=20006, tenant_id=500)
[2024-09-13 13:02:16.222687] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] ObTimer create success(this=0x2b0796873e60, thread_id=20006, lbt()=0x24edc06b 0x13836960 0x115a4182 0xb8e0d07 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:16.223902] INFO [STORAGE] init (ob_ddl_redo_log_writer.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] succeed to init ObDDLCtrlSpeedHandle(ret=0)
[2024-09-13 13:02:16.223916] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] Succ to register cache(cache_name="tx_data_kv_cache", priority=2, cache_id=12)
[2024-09-13 13:02:16.224235] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20007][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=562640715776)
[2024-09-13 13:02:16.224326] INFO run1 (ob_timer.cpp:361) [20006][][T0][Y0-0000000000000000-0-0] [lt=18] timer thread started(this=0x2b0796873e60, tid=20006, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:16.224489] INFO register_pm (ob_page_manager.cpp:40) [20007][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07b9e56340, pm.get_tid()=20007, tenant_id=500)
[2024-09-13 13:02:16.224551] INFO [COMMON] init (ob_dedup_queue.cpp:111) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] init dedup-queue:(thread_num=1, queue_size=5, task_map_size=5, total_mem_limit=1073741824, hold_mem_limit=536870912, page_size=65408, this=0x55a386e17480, lbt="0x24edc06b 0x13820f43 0x13820411 0x1092aeda 0xb8e0e28 0x7ff47de 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:16.224526] INFO [COMMON] run1 (ob_dedup_queue.cpp:361) [20007][][T0][Y0-0000000000000000-0-0] [lt=28] dedup queue thread start(this=0x55a386e17480)
[2024-09-13 13:02:16.224575] INFO init (ob_locality_manager.cpp:85) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=19] start tg(lib::TGDefIDs::LocalityReload=54, tg_name=LocalityReload)
[2024-09-13 13:02:16.224807] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20008][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=566935683072)
[2024-09-13 13:02:16.224930] INFO register_pm (ob_page_manager.cpp:40) [20008][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b9ed4340, pm.get_tid()=20008, tenant_id=500)
[2024-09-13 13:02:16.224972] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] ObTimer create success(this=0x2b079686fe60, thread_id=20008, lbt()=0x24edc06b 0x13836960 0x115a4182 0x1092afed 0xb8e0e28 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:16.225156] INFO [STORAGE.TRANS] init (ob_location_adapter.cpp:46) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] ob location cache adapter inited success
[2024-09-13 13:02:16.225170] INFO [STORAGE.TRANS] alloc (ob_trans_factory.cpp:265) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] trans factory statistics(object_name="ObGtsRpcProxy", label="ObModIds::OB_GTS_RPC_PROXY", alloc_count=0, release_count=0, used=0)
[2024-09-13 13:02:16.225192] INFO [STORAGE.TRANS] alloc (ob_trans_factory.cpp:266) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=20] trans factory statistics(object_name="ObGtsRequestRpc", label="ObModIds::OB_GTS_REQUEST_RPC", alloc_count=0, release_count=0, used=0)
[2024-09-13 13:02:16.225234] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] create tg succeed(tg_id=295, tg=0x2b07b6dfdc30, thread_cnt=1, tg->attr_={name:TSWorker, type:4})
[2024-09-13 13:02:16.225265] INFO run1 (ob_timer.cpp:361) [20008][][T0][Y0-0000000000000000-0-0] [lt=25] timer thread started(this=0x2b079686fe60, tid=20008, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:16.225544] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20009][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=571230650368)
[2024-09-13 13:02:16.225620] INFO register_pm (ob_page_manager.cpp:40) [20009][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07b9f52340, pm.get_tid()=20009, tenant_id=500)
[2024-09-13 13:02:16.225641] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] simple thread pool init success(name=TSWorker, thread_num=1, task_num_limit=10240)
[2024-09-13 13:02:16.225651] INFO init (ob_ts_worker.cpp:40) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id_=295, tg_name=TSWorker)
[2024-09-13 13:02:16.225661] INFO [STORAGE.TRANS] init (ob_ts_worker.cpp:43) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] ts worker thread pool init success
[2024-09-13 13:02:16.225667] INFO [STORAGE.TRANS] init (ob_ts_worker.cpp:52) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] ts worker init success(this=0x55a387b60128, ts_mgr=0x55a387b5fd80, use_local_worker=true)
[2024-09-13 13:02:16.225682] INFO [STORAGE.TRANS] init (ob_gts_rpc.cpp:138) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] gts request rpc inited success(this=0x2b07b6dfd5c0, self="172.16.51.35:2882", ts_mgr=0x55a387b5fd80)
[2024-09-13 13:02:16.226750] INFO [STORAGE.TRANS] init (ob_ts_mgr.cpp:359) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] ObTsMgr inited success(this=0x55a387b5fd80, server="172.16.51.35:2882")
[2024-09-13 13:02:16.226766] INFO [SERVER] init_ts_mgr (ob_server.cpp:2694) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] gts cache mgr init success
[2024-09-13 13:02:16.226776] INFO [STORAGE.TRANS] init (ob_weak_read_service.cpp:47) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] [WRS] weak read service init succ
[2024-09-13 13:02:16.226783] INFO [STORAGE.TRANS] init (ob_black_list.cpp:41) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] BLService init success(*this={is_inited:true, is_running:false})
[2024-09-13 13:02:16.226931] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] Succ to register cache(cache_name="external_table_file_cache", priority=1, cache_id=13)
[2024-09-13 13:02:16.232547] INFO [STORAGE.REDO] init (ob_storage_log_writer.cpp:103) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] Successfully init slog writer(ret=0, log_dir=0x55a387d0cbc0, log_file_size=67108864, max_log_size=8192, log_file_spec={retry_write_policy:"normal", log_create_policy:"normal", log_write_policy:"truncate"})
[2024-09-13 13:02:16.232576] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=22] ob_pthread_create start
[2024-09-13 13:02:16.232959] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20010][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=575525617664)
[2024-09-13 13:02:16.233075] INFO register_pm (ob_page_manager.cpp:40) [20010][][T0][Y0-0000000000000000-0-0] [lt=29] register pm finish(ret=0, &pm=0x2b07b9fd0340, pm.get_tid()=20010, tenant_id=500)
[2024-09-13 13:02:16.233108] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ob_pthread_create succeed(thread=0x2b07b56bfe70)
[2024-09-13 13:02:16.233137] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] create tg succeed(tg_id=296, tg=0x2b07b570fd80, thread_cnt=8, tg->attr_={name:SvrStartupHandler, type:4})
[2024-09-13 13:02:16.233306] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20011][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=579820584960)
[2024-09-13 13:02:16.233403] INFO register_pm (ob_page_manager.cpp:40) [20011][][T0][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07bb056340, pm.get_tid()=20011, tenant_id=500)
[2024-09-13 13:02:16.233509] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] ObTimer create success(this=0x55a387b27e70, thread_id=20011, lbt()=0x24edc06b 0x13836960 0xf83a703 0xb8e1070 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:16.233693] INFO run1 (ob_timer.cpp:361) [20011][][T0][Y0-0000000000000000-0-0] [lt=45] timer thread started(this=0x55a387b27e70, tid=20011, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:16.233838] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20012][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=584115552256)
[2024-09-13 13:02:16.233972] INFO register_pm (ob_page_manager.cpp:40) [20012][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07bb0d4340, pm.get_tid()=20012, tenant_id=500)
[2024-09-13 13:02:16.234007] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20012][Occam][T0][Y0-0000000000000000-0-0] [lt=22] thread is running function
[2024-09-13 13:02:16.234006] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] init thread success(this=0x2b07b6dff6c0, id=2, ret=0)
[2024-09-13 13:02:16.234395] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20013][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=588410519552)
[2024-09-13 13:02:16.234515] INFO register_pm (ob_page_manager.cpp:40) [20013][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07bb152340, pm.get_tid()=20013, tenant_id=500)
[2024-09-13 13:02:16.234541] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] init thread success(this=0x2b07baf0c030, id=3, ret=0)
[2024-09-13 13:02:16.234542] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20013][Occam][T0][Y0-0000000000000000-0-0] [lt=14] thread is running function
[2024-09-13 13:02:16.234576] INFO [OCCAM] init (ob_occam_thread_pool.h:248) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] init occam thread pool success(ret=0, thread_num=1, queue_size_square_of_2=10, lbt()="0x24edc06b 0x8359ee6 0x8358cc3 0x8215155 0x821564e 0xb8e1390 0x7ff47de 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:16.234967] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] TimeWheelBase inited success(precision=10000, start_ticket=172620373623, scan_ticket=172620373623)
[2024-09-13 13:02:16.234976] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] ObTimeWheel init success(precision=10000, real_thread_num=1)
[2024-09-13 13:02:16.235114] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20014][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=592705486848)
[2024-09-13 13:02:16.235225] INFO register_pm (ob_page_manager.cpp:40) [20014][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07bb1d0340, pm.get_tid()=20014, tenant_id=500)
[2024-09-13 13:02:16.235251] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] ObTimeWheel start success(timer_name="GEleTimer")
[2024-09-13 13:02:16.235260] INFO [OCCAM] init_and_start (ob_occam_timer.h:570) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] init ObOccamTimer success(ret=0)
[2024-09-13 13:02:16.235317] INFO [SERVER.OMT] init (ob_multi_tenant.cpp:572) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] succ to init multi tenant
[2024-09-13 13:02:16.235535] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] Succ to register cache(cache_name="opt_table_stat_cache", priority=1, cache_id=14)
[2024-09-13 13:02:16.235544] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] Succ to register cache(cache_name="opt_column_stat_cache", priority=1, cache_id=15)
[2024-09-13 13:02:16.235548] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Succ to register cache(cache_name="opt_ds_stat_cache", priority=1, cache_id=16)
[2024-09-13 13:02:16.235553] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Succ to register cache(cache_name="opt_system_stat_cache", priority=1, cache_id=17)
[2024-09-13 13:02:16.235704] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20015][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=597000454144)
[2024-09-13 13:02:16.235797] INFO register_pm (ob_page_manager.cpp:40) [20015][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07bb256340, pm.get_tid()=20015, tenant_id=500)
[2024-09-13 13:02:16.235825] INFO [COMMON] run1 (ob_dedup_queue.cpp:361) [20015][][T0][Y0-0000000000000000-0-0] [lt=19] dedup queue thread start(this=0x55a387480a40)
[2024-09-13 13:02:16.235844] INFO [COMMON] init (ob_dedup_queue.cpp:111) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] init dedup-queue:(thread_num=1, queue_size=5, task_map_size=5, total_mem_limit=1073741824, hold_mem_limit=536870912, page_size=65408, this=0x55a387480a40, lbt="0x24edc06b 0x13820f43 0x13820411 0x125e3021 0xb8e178f 0x7ff47de 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:16.257571] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=0] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9)
[2024-09-13 13:02:16.270779] INFO init (ob_table_store_stat_mgr.cpp:303) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] start tg(lib::TGDefIDs::TableStatRpt=71, tg_name=TableStatRpt)
[2024-09-13 13:02:16.271091] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20016][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=601295421440)
[2024-09-13 13:02:16.271287] INFO register_pm (ob_page_manager.cpp:40) [20016][][T0][Y0-0000000000000000-0-0] [lt=32] register pm finish(ret=0, &pm=0x2b07bb2d4340, pm.get_tid()=20016, tenant_id=500)
[2024-09-13 13:02:16.271371] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=28] ObTimer create success(this=0x2b07968af190, thread_id=20016, lbt()=0x24edc06b 0x13836960 0x115a4182 0x109eab5e 0xb8e18ad 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:16.271988] INFO run1 (ob_timer.cpp:361) [20016][][T0][Y0-0000000000000000-0-0] [lt=20] timer thread started(this=0x2b07968af190, tid=20016, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:16.276653] INFO [STORAGE] init (ob_table_store_stat_mgr.cpp:320) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] schedule report task succeed
[2024-09-13 13:02:16.281847] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20017][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=605590388736)
[2024-09-13 13:02:16.282020] INFO register_pm (ob_page_manager.cpp:40) [20017][][T0][Y0-0000000000000000-0-0] [lt=42] register pm finish(ret=0, &pm=0x2b07bb352340, pm.get_tid()=20017, tenant_id=500)
[2024-09-13 13:02:16.282124] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=30] ObTimer create success(this=0x55a38ba42ca0, thread_id=20017, lbt()=0x24edc06b 0x13836960 0x136c6760 0xb8e1908 0x7ff47de 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:16.282396] INFO [SHARE] init (ob_bg_thread_monitor.cpp:229) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=18] init ObBGThreadMonitor success(ret=0, MONITOR_LIMIT=500)
[2024-09-13 13:02:16.282491] INFO run1 (ob_timer.cpp:361) [20017][][T0][Y0-0000000000000000-0-0] [lt=40] timer thread started(this=0x55a38ba42ca0, tid=20017, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:16.292239] INFO [SHARE] init (ob_resource_plan_manager.cpp:35) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] resource plan manager init ok
[2024-09-13 13:02:16.292477] INFO [SHARE] init (ob_resource_mapping_rule_manager.cpp:44) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=18] resource mapping rule manager init ok
[2024-09-13 13:02:16.292651] INFO [SQL.ENG] reset (ob_px_target_mgr.cpp:63) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] ObPxTargetMgr reset success(server_="0.0.0.0:0")
[2024-09-13 13:02:16.292856] INFO [SQL.ENG] init (ob_px_target_mgr.cpp:50) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] ObPxTargetMgr inited success(server_="172.16.51.35:2882")
[2024-09-13 13:02:16.292866] INFO [SERVER] init_px_target_mgr (ob_server.cpp:2706) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] px target mgr init success
[2024-09-13 13:02:16.292892] INFO [COMMON] init (ob_kv_storecache.h:427) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] Succ to register cache(cache_name="BACKUP_INDEX_CACHE", priority=1, cache_id=18)
[2024-09-13 13:02:16.319124] INFO [SHARE] allocate_ash_buffer (ob_active_sess_hist_list.cpp:119) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] init ASH circular buffer OK(size=59578)
[2024-09-13 13:02:16.319153] INFO [SHARE] init (ob_active_sess_hist_list.cpp:62) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=29] ash buffer init OK(ash_buffer={this:0x55a38b35fcc0, block_ptr_.control_ptr:0x2b07b7589c80, block_ptr_.data_ptr:0x2b07b7589ce0})
[2024-09-13 13:02:16.319205] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] ObSliceAlloc init finished(bsize_=7936, isize_=96, slice_limit_=7536, tmallocator_=NULL)
[2024-09-13 13:02:16.319271] INFO [SHARE] init (ob_server_blacklist.cpp:182) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] ObServerBlacklist init success
[2024-09-13 13:02:16.319284] INFO init (ob_server_blacklist.cpp:185) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] start tg(lib::TGDefIDs::Blacklist=17, tg_name=Blacklist)
[2024-09-13 13:02:16.319564] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20019][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=609885356032)
[2024-09-13 13:02:16.319734] INFO register_pm (ob_page_manager.cpp:40) [20019][][T0][Y0-0000000000000000-0-0] [lt=29] register pm finish(ret=0, &pm=0x2b07bb3d0340, pm.get_tid()=20019, tenant_id=500)
[2024-09-13 13:02:16.320245] INFO [SHARE] blacklist_loop_ (ob_server_blacklist.cpp:313) [20019][Blacklist][T0][Y0-0000000000000000-0-0] [lt=19] blacklist_loop exec finished(cost_time=29, is_enabled=true, send_cnt=0)
[2024-09-13 13:02:16.320302] INFO init (ob_detect_manager.cpp:647) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] start tg(lib::TGDefIDs::DetectManager=118, tg_name=DetectManager)
[2024-09-13 13:02:16.320525] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20020][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=614180323328)
[2024-09-13 13:02:16.320627] INFO register_pm (ob_page_manager.cpp:40) [20020][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07be856340, pm.get_tid()=20020, tenant_id=500)
[2024-09-13 13:02:16.320653] INFO [LIB] init (ob_detect_manager.cpp:652) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=18] [DM] ObDetectManagerThread init success(self="172.16.51.35:2882")
[2024-09-13 13:02:16.320696] INFO [SERVER] init (ob_server.cpp:541) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] [OBSERVER_NOTICE] success to init observer(cluster_id=1726203323, lib::g_runtime_enabled=true)
[2024-09-13 13:02:16.320719] INFO [SERVER] init (ob_server.cpp:544) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] [server_start 4/18] observer init success.
[2024-09-13 13:02:16.320729] INFO [SERVER] start (ob_server.cpp:851) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] [OBSERVER_NOTICE] start observer begin
[2024-09-13 13:02:16.320746] INFO [SERVER] start (ob_server.cpp:854) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] [server_start 5/18] observer start begin.
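Every entry above follows the same observer.log syslog layout: timestamp, level, optional module tag, function, `file:line`, `[tid][thread name][tenant][trace id]`, `[lt=...]`, an optional `[errcode=...]`, then the message. A minimal parsing sketch for turning such lines into structured records — the regex and field names are assumptions of mine based on the lines shown here, not an OceanBase-provided tool:

```python
import re

# Hypothetical pattern inferred from the log lines in this file; field names
# (ts, level, module, ...) are my own labels, not official OceanBase terms.
LOG_RE = re.compile(
    r"\[(?P<ts>[\d\- :.]+)\]\s+"
    r"(?P<level>INFO|WDIAG|WARN|ERROR|EDIAG)\s+"
    r"(?:\[(?P<module>[A-Z.]+)\]\s+)?"          # module tag is absent on some lines
    r"(?P<func>\S+)\s+"
    r"\((?P<file>[^:]+):(?P<line>\d+)\)\s+"
    r"\[(?P<tid>\d+)\]\[(?P<thread>[^\]]*)\]\[(?P<tenant>T\d+)\]\[(?P<trace>[^\]]+)\]\s+"
    r"\[lt=(?P<lt>\d+)\]"
    r"(?:\[errcode=(?P<errcode>-?\d+)\])?\s+"   # only WDIAG/ERROR lines carry errcode
    r"(?P<msg>.*)"
)

def parse_line(line: str):
    """Return a dict of fields for a well-formed log line, else None."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None
```

For example, feeding one of the `REACH SYSLOG RATE LIMIT` lines through `parse_line` yields `level="WDIAG"` and `errcode="-4006"`, which makes it easy to filter the rate-limit noise from the genuine startup steps.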
[2024-09-13 13:02:16.320753] INFO [SERVER] start (ob_server.cpp:872) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] [server_start 6/18] observer instance start begin.
[2024-09-13 13:02:16.320982] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20021][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=618475290624)
[2024-09-13 13:02:16.321095] INFO register_pm (ob_page_manager.cpp:40) [20021][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07be8d4340, pm.get_tid()=20021, tenant_id=500)
[2024-09-13 13:02:16.321264] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20022][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=622770257920)
[2024-09-13 13:02:16.321345] INFO register_pm (ob_page_manager.cpp:40) [20022][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07be952340, pm.get_tid()=20022, tenant_id=500)
[2024-09-13 13:02:16.321369] INFO [SERVER] start_sig_worker_and_handle (ob_server.cpp:840) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] success to start signal worker and handle
[2024-09-13 13:02:16.321528] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20023][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=627065225216)
[2024-09-13 13:02:16.321623] INFO register_pm (ob_page_manager.cpp:40) [20023][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07be9d0340, pm.get_tid()=20023, tenant_id=500)
[2024-09-13 13:02:16.321885] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20024][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=631360192512)
[2024-09-13 13:02:16.322006] INFO register_pm (ob_page_manager.cpp:40) [20024][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bea56340, pm.get_tid()=20024, tenant_id=500)
[2024-09-13 13:02:16.322201] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20025][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=635655159808)
[2024-09-13 13:02:16.322283] INFO register_pm (ob_page_manager.cpp:40) [20025][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bead4340, pm.get_tid()=20025, tenant_id=500)
[2024-09-13 13:02:16.322478] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20026][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=639950127104)
[2024-09-13 13:02:16.322570] INFO register_pm (ob_page_manager.cpp:40) [20026][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07beb52340, pm.get_tid()=20026, tenant_id=500)
[2024-09-13 13:02:16.322721] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20027][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=644245094400)
[2024-09-13 13:02:16.322794] INFO register_pm (ob_page_manager.cpp:40) [20027][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bebd0340, pm.get_tid()=20027, tenant_id=500)
[2024-09-13 13:02:16.322956] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20028][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=648540061696)
[2024-09-13 13:02:16.323024] INFO register_pm (ob_page_manager.cpp:40) [20028][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bec56340, pm.get_tid()=20028, tenant_id=500)
[2024-09-13 13:02:16.323173] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20029][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=652835028992)
[2024-09-13 13:02:16.323245] INFO register_pm (ob_page_manager.cpp:40) [20029][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07becd4340, pm.get_tid()=20029, tenant_id=500)
[2024-09-13 13:02:16.323413] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20030][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=657129996288)
[2024-09-13 13:02:16.323489] INFO register_pm (ob_page_manager.cpp:40) [20030][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bed52340, pm.get_tid()=20030, tenant_id=500)
[2024-09-13 13:02:16.323525] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] simple thread pool init success(name=SvrStartupHandler, thread_num=8, task_num_limit=128)
[2024-09-13 13:02:16.323539] INFO start (ob_server_startup_task_handler.cpp:58) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] start tg(tg_id_=296, tg_name=SvrStartupHandler)
[2024-09-13 13:02:16.323553] INFO [SERVER] start (ob_server.cpp:880) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] success to start server startup task handler
[2024-09-13 13:02:16.323560] INFO [STORAGE.TRANS] start (ob_gts_rpc.cpp:154) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] gts request rpc start success
[2024-09-13 13:02:16.323824] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20031][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=661424963584)
[2024-09-13 13:02:16.323951] INFO register_pm (ob_page_manager.cpp:40) [20031][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bedd0340, pm.get_tid()=20031, tenant_id=500)
[2024-09-13 13:02:16.323992] INFO [STORAGE.TRANS] start (ob_ts_mgr.cpp:407) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] ObTsMgr start success
[2024-09-13 13:02:16.324004] INFO [SERVER] start (ob_server.cpp:886) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] success to start ts mgr
[2024-09-13 13:02:16.324018] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ob_pthread_create start
[2024-09-13 13:02:16.324247] INFO [SHARE] get_next_sess_id
(ob_active_session_guard.cpp:336) [20032][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=665719930880) [2024-09-13 13:02:16.324351] INFO register_pm (ob_page_manager.cpp:40) [20032][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bee56340, pm.get_tid()=20032, tenant_id=500) [2024-09-13 13:02:16.324375] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] ob_pthread_create succeed(thread=0x2b07b579be70) [2024-09-13 13:02:16.324385] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] ob_pthread_create start [2024-09-13 13:02:16.324608] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20033][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=670014898176) [2024-09-13 13:02:16.324750] INFO register_pm (ob_page_manager.cpp:40) [20033][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07beed4340, pm.get_tid()=20033, tenant_id=500) [2024-09-13 13:02:16.324774] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] ob_pthread_create succeed(thread=0x2b07b579fe70) [2024-09-13 13:02:16.324780] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] ob_pthread_create start [2024-09-13 13:02:16.325011] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20034][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=674309865472) [2024-09-13 13:02:16.325126] INFO register_pm (ob_page_manager.cpp:40) [20034][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bef52340, pm.get_tid()=20034, tenant_id=500) [2024-09-13 13:02:16.325148] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ob_pthread_create succeed(thread=0x2b07b57a5e70) [2024-09-13 13:02:16.325157] INFO 
ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ob_pthread_create start [2024-09-13 13:02:16.325378] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20035][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=678604832768) [2024-09-13 13:02:16.325492] INFO register_pm (ob_page_manager.cpp:40) [20035][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07befd0340, pm.get_tid()=20035, tenant_id=500) [2024-09-13 13:02:16.325524] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] ob_pthread_create succeed(thread=0x2b07b57abe70) [2024-09-13 13:02:16.325530] INFO [RPC.FRAME] start (ob_net_easy.cpp:834) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] start rpc easy io [2024-09-13 13:02:16.325534] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ob_pthread_create start [2024-09-13 13:02:16.325745] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20036][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=682899800064) [2024-09-13 13:02:16.325883] INFO register_pm (ob_page_manager.cpp:40) [20036][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bf256340, pm.get_tid()=20036, tenant_id=500) [2024-09-13 13:02:16.325913] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] ob_pthread_create succeed(thread=0x2b07b57b1e70) [2024-09-13 13:02:16.325921] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] ob_pthread_create start [2024-09-13 13:02:16.326121] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20037][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=687194767360) [2024-09-13 13:02:16.326211] INFO register_pm (ob_page_manager.cpp:40) 
[20037][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bf2d4340, pm.get_tid()=20037, tenant_id=500) [2024-09-13 13:02:16.326234] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=2] ob_pthread_create succeed(thread=0x2b07b57b5e70) [2024-09-13 13:02:16.326242] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] ob_pthread_create start [2024-09-13 13:02:16.326453] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20038][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=691489734656) [2024-09-13 13:02:16.326546] INFO register_pm (ob_page_manager.cpp:40) [20038][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bf352340, pm.get_tid()=20038, tenant_id=500) [2024-09-13 13:02:16.326571] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] ob_pthread_create succeed(thread=0x2b07b57bbe70) [2024-09-13 13:02:16.326579] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] ob_pthread_create start [2024-09-13 13:02:16.326707] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20039][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=695784701952) [2024-09-13 13:02:16.326779] INFO register_pm (ob_page_manager.cpp:40) [20039][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bf3d0340, pm.get_tid()=20039, tenant_id=500) [2024-09-13 13:02:16.326798] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=2] ob_pthread_create succeed(thread=0x2b07b57c1e70) [2024-09-13 13:02:16.326803] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ob_pthread_create start [2024-09-13 13:02:16.327017] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) 
[20040][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=700079669248) [2024-09-13 13:02:16.327110] INFO register_pm (ob_page_manager.cpp:40) [20040][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07bf456340, pm.get_tid()=20040, tenant_id=500) [2024-09-13 13:02:16.327140] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ob_pthread_create succeed(thread=0x2b07b57c7e70) [2024-09-13 13:02:16.327149] INFO [RPC.FRAME] start (ob_net_easy.cpp:855) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] start batch rpc easy io [2024-09-13 13:02:16.327153] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ob_pthread_create start [2024-09-13 13:02:16.327325] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20041][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=704374636544) [2024-09-13 13:02:16.327509] INFO register_pm (ob_page_manager.cpp:40) [20041][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07bf4d4340, pm.get_tid()=20041, tenant_id=500) [2024-09-13 13:02:16.327538] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ob_pthread_create succeed(thread=0x2b07b57cbe70) [2024-09-13 13:02:16.327546] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] ob_pthread_create start [2024-09-13 13:02:16.327767] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20042][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=708669603840) [2024-09-13 13:02:16.327887] INFO register_pm (ob_page_manager.cpp:40) [20042][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07bf552340, pm.get_tid()=20042, tenant_id=500) [2024-09-13 13:02:16.327914] INFO ob_pthread_create (ob_pthread.cpp:39) 
[19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] ob_pthread_create succeed(thread=0x2b07b57d1e70) [2024-09-13 13:02:16.327929] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] ob_pthread_create start [2024-09-13 13:02:16.328122] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20043][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=712964571136) [2024-09-13 13:02:16.328236] INFO register_pm (ob_page_manager.cpp:40) [20043][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bf5d0340, pm.get_tid()=20043, tenant_id=500) [2024-09-13 13:02:16.328267] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] ob_pthread_create succeed(thread=0x2b07b57d7e70) [2024-09-13 13:02:16.328275] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] ob_pthread_create start [2024-09-13 13:02:16.328510] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20044][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=717259538432) [2024-09-13 13:02:16.328626] INFO register_pm (ob_page_manager.cpp:40) [20044][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07bf656340, pm.get_tid()=20044, tenant_id=500) [2024-09-13 13:02:16.328650] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] ob_pthread_create succeed(thread=0x2b07b57dde70) [2024-09-13 13:02:16.328655] INFO [RPC.FRAME] start (ob_net_easy.cpp:865) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] start mysql easy io [2024-09-13 13:02:16.328660] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ob_pthread_create start [2024-09-13 13:02:16.328868] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20045][][T0][Y0-0000000000000000-0-0] [lt=0] succ to 
generate background session id(sessid=721554505728) [2024-09-13 13:02:16.328995] INFO register_pm (ob_page_manager.cpp:40) [20045][][T0][Y0-0000000000000000-0-0] [lt=25] register pm finish(ret=0, &pm=0x2b07bf6d4340, pm.get_tid()=20045, tenant_id=500) [2024-09-13 13:02:16.329016] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] ob_pthread_create succeed(thread=0x2b07b57e1e70) [2024-09-13 13:02:16.329021] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] ob_pthread_create start [2024-09-13 13:02:16.329242] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20046][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=725849473024) [2024-09-13 13:02:16.329369] INFO register_pm (ob_page_manager.cpp:40) [20046][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bf752340, pm.get_tid()=20046, tenant_id=500) [2024-09-13 13:02:16.329391] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] ob_pthread_create succeed(thread=0x2b07b57e7e70) [2024-09-13 13:02:16.329399] INFO [RPC.FRAME] start (ob_net_easy.cpp:875) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] start mysql unix easy io [2024-09-13 13:02:16.329406] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] ob_pthread_create start [2024-09-13 13:02:16.329607] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20047][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=730144440320) [2024-09-13 13:02:16.329715] INFO register_pm (ob_page_manager.cpp:40) [20047][][T0][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07bf7d0340, pm.get_tid()=20047, tenant_id=500) [2024-09-13 13:02:16.329749] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=2] ob_pthread_create 
succeed(thread=0x2b07b57ede70) [2024-09-13 13:02:16.329758] INFO ob_pthread_create (ob_pthread.cpp:26) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] ob_pthread_create start [2024-09-13 13:02:16.329984] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20048][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=734439407616) [2024-09-13 13:02:16.330110] INFO register_pm (ob_page_manager.cpp:40) [20048][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bf856340, pm.get_tid()=20048, tenant_id=500) [2024-09-13 13:02:16.330138] INFO ob_pthread_create (ob_pthread.cpp:39) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] ob_pthread_create succeed(thread=0x2b07b57f3e70) [2024-09-13 13:02:16.330152] INFO [RPC.FRAME] start (ob_net_easy.cpp:885) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13] start rpc unix easy io [2024-09-13 13:02:16.330360] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20049][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=738734374912) [2024-09-13 13:02:16.330479] INFO register_pm (ob_page_manager.cpp:40) [20049][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bf8d4340, pm.get_tid()=20049, tenant_id=500) [2024-09-13 13:02:16.330726] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20050][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=743029342208) [2024-09-13 13:02:16.330835] INFO register_pm (ob_page_manager.cpp:40) [20050][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bf952340, pm.get_tid()=20050, tenant_id=500) [2024-09-13 13:02:16.330864] INFO [SERVER] start (ob_srv_network_frame.cpp:198) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ObNetKeepAlive start success! 
[2024-09-13 13:02:16.330929] INFO [RPC.OBMYSQL] init_listen (ob_sql_nio.cpp:824) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] sql_nio init listen succ(port=2881, fd=98) [2024-09-13 13:02:16.330940] INFO [RPC.OBMYSQL] init_listen (ob_sql_nio.cpp:839) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] sql_nio init listen succ(port=2881) [2024-09-13 13:02:16.330963] INFO [RPC.OBMYSQL] init_listen (ob_sql_nio.cpp:824) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] sql_nio init listen succ(port=2881, fd=101) [2024-09-13 13:02:16.330971] INFO [RPC.OBMYSQL] init_listen (ob_sql_nio.cpp:839) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] sql_nio init listen succ(port=2881) [2024-09-13 13:02:16.330992] INFO [RPC.OBMYSQL] init_listen (ob_sql_nio.cpp:824) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] sql_nio init listen succ(port=2881, fd=104) [2024-09-13 13:02:16.331000] INFO [RPC.OBMYSQL] init_listen (ob_sql_nio.cpp:839) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] sql_nio init listen succ(port=2881) [2024-09-13 13:02:16.331232] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20051][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=747324309504) [2024-09-13 13:02:16.331326] INFO register_pm (ob_page_manager.cpp:40) [20051][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bf9d0340, pm.get_tid()=20051, tenant_id=500) [2024-09-13 13:02:16.331556] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20052][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=751619276800) [2024-09-13 13:02:16.331625] INFO register_pm (ob_page_manager.cpp:40) [20052][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bfa56340, pm.get_tid()=20052, tenant_id=500) [2024-09-13 13:02:16.331864] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20053][][T0][Y0-0000000000000000-0-0] [lt=0] succ to 
generate background session id(sessid=755914244096) [2024-09-13 13:02:16.331975] INFO register_pm (ob_page_manager.cpp:40) [20053][][T0][Y0-0000000000000000-0-0] [lt=23] register pm finish(ret=0, &pm=0x2b07bfad4340, pm.get_tid()=20053, tenant_id=500) [2024-09-13 13:02:16.332011] INFO start (ob_ingress_bw_alloc_service.cpp:329) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] start tg(tg_id_=116, tg_name=IngressService) [2024-09-13 13:02:16.332239] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20054][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=760209211392) [2024-09-13 13:02:16.332345] INFO register_pm (ob_page_manager.cpp:40) [20054][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bfb52340, pm.get_tid()=20054, tenant_id=500) [2024-09-13 13:02:16.332422] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] ObTimer create success(this=0x2b07968faf70, thread_id=20054, lbt()=0x24edc06b 0x13836960 0x115a4182 0xab55448 0xb57b837 0xb8f84fe 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.332449] INFO [SERVER] start (ob_server.cpp:892) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] success to start net frame [2024-09-13 13:02:16.332509] INFO [STORAGE_BLKMGR] start (ob_block_manager.cpp:199) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=19] [server_start 7/18] block manager start begin. 
[2024-09-13 13:02:16.332773] INFO run1 (ob_timer.cpp:361) [20054][][T0][Y0-0000000000000000-0-0] [lt=15] timer thread started(this=0x2b07968faf70, tid=20054, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:16.333233] INFO acceptfd_handle_first_readable_event (handle-event.c:378) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] recv non-negotiation message, the fd will be dispatched, fd:93, src_addr:172.16.51.38:48318, magic:0x78563412 [2024-09-13 13:02:16.333251] INFO dispatch_accept_fd_to_certain_group (group.c:691) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=13] PNIO dispatch fd to oblistener, fd:93 [2024-09-13 13:02:16.333266] INFO [RPC] read_client_magic (ob_listener.cpp:226) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=10] read negotiation msg(rcv_byte=19) [2024-09-13 13:02:16.333286] INFO [RPC] read_client_magic (ob_listener.cpp:246) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=19] read_client_magic, (client_magic=7386785325300370467, index=0) [2024-09-13 13:02:16.333305] INFO [RPC] trace_connection_info (ob_listener.cpp:290) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=13] oblistener receive connection from(peer="172.16.51.38:48318") [2024-09-13 13:02:16.333313] INFO [RPC] do_one_event (ob_listener.cpp:421) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=8] dispatch to(client_magic=7386785325300370467, index=0) [2024-09-13 13:02:16.333324] INFO [RPC] connection_redispatch (ob_listener.cpp:268) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=9] dipatch(conn_fd=93, count=1, index=0, wrfd=58) [2024-09-13 13:02:16.333336] INFO [RPC] connection_redispatch (ob_listener.cpp:274) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] dispatch success!(conn_fd=93, wrfd=58) [2024-09-13 13:02:16.333394] INFO [RPC.OBRPC] do_server_loop (ob_net_keepalive.cpp:461) [20049][KeepAliveServer][T0][Y0-0000000000000000-0-0] [lt=18] new connection established, fd: 93, addr: 
"172.16.51.38:48318" [2024-09-13 13:02:16.342118] INFO [STORAGE] format_startup_super_block (ob_super_block_struct.cpp:238) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] success to format super block(*this={header:{version:1, magic:1018, body_size:81, body_crc:1399495964}, body:{Type:"ObServerSuperBlockBody", create_timestamp:1726203736342101, modify_timestamp:1726203736342101, macro_block_size:2097152, total_macro_block_count:10240, total_file_size:21474836480, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, tenant_meta_entry:[-1](ver=0,mode=0,seq=0)}}) [2024-09-13 13:02:16.342234] INFO [STORAGE] serialize (ob_super_block_struct.cpp:148) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=39] succeed to serialize super block buf(buf_size=65536, pos=97, *this={header:{version:1, magic:1018, body_size:81, body_crc:1399495964}, body:{Type:"ObServerSuperBlockBody", create_timestamp:1726203736342101, modify_timestamp:1726203736342101, macro_block_size:2097152, total_macro_block_count:10240, total_file_size:21474836480, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, tenant_meta_entry:[-1](ver=0,mode=0,seq=0)}}) [2024-09-13 13:02:16.347677] INFO [STORAGE.BLKMGR] write_super_block (ob_block_manager.cpp:447) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] succeed to write super block(ret=0, super_block={header:{version:1, magic:1018, body_size:81, body_crc:1399495964}, body:{Type:"ObServerSuperBlockBody", create_timestamp:1726203736342101, modify_timestamp:1726203736342101, macro_block_size:2097152, total_macro_block_count:10240, total_file_size:21474836480, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, tenant_meta_entry:[-1](ver=0,mode=0,seq=0)}}) [2024-09-13 13:02:16.347696] INFO [STORAGE.BLKMGR] start (ob_block_manager.cpp:223) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=19] succeed to format super block, (super_block_={header:{version:1, magic:1018, body_size:81, body_crc:1399495964}, 
body:{Type:"ObServerSuperBlockBody", create_timestamp:1726203736342101, modify_timestamp:1726203736342101, macro_block_size:2097152, total_macro_block_count:10240, total_file_size:21474836480, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, tenant_meta_entry:[-1](ver=0,mode=0,seq=0)}}) [2024-09-13 13:02:16.347720] INFO [STORAGE.BLKMGR] start (ob_block_manager.cpp:245) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] start block manager(need_format=true) [2024-09-13 13:02:16.347742] INFO [STORAGE_BLKMGR] start (ob_block_manager.cpp:256) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] [server_start 8/18] block manager start success. [2024-09-13 13:02:16.347751] INFO [SERVER] start (ob_server.cpp:900) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] success to start block manager [2024-09-13 13:02:16.347763] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=256, tg_name=IO_SCHEDULE) [2024-09-13 13:02:16.348026] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20055][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=764504178688) [2024-09-13 13:02:16.348199] INFO register_pm (ob_page_manager.cpp:40) [20055][][T0][Y0-0000000000000000-0-0] [lt=26] register pm finish(ret=0, &pm=0x2b07bfbd0340, pm.get_tid()=20055, tenant_id=500) [2024-09-13 13:02:16.348243] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] start tg(tg_id_=257, tg_name=IO_SCHEDULE) [2024-09-13 13:02:16.348252] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20055][IO_SCHEDULE1][T0][Y0-0000000000000000-0-0] [lt=22] io schedule thread started(thread_id=1) [2024-09-13 13:02:16.348503] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20056][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=768799145984) [2024-09-13 13:02:16.348628] INFO register_pm (ob_page_manager.cpp:40) 
[20056][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07bfc56340, pm.get_tid()=20056, tenant_id=500) [2024-09-13 13:02:16.348654] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=258, tg_name=IO_SCHEDULE) [2024-09-13 13:02:16.348654] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20056][IO_SCHEDULE2][T0][Y0-0000000000000000-0-0] [lt=19] io schedule thread started(thread_id=2) [2024-09-13 13:02:16.348897] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20057][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=773094113280) [2024-09-13 13:02:16.349042] INFO register_pm (ob_page_manager.cpp:40) [20057][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07bfcd4340, pm.get_tid()=20057, tenant_id=500) [2024-09-13 13:02:16.349067] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=259, tg_name=IO_SCHEDULE) [2024-09-13 13:02:16.349067] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20057][IO_SCHEDULE3][T0][Y0-0000000000000000-0-0] [lt=19] io schedule thread started(thread_id=3) [2024-09-13 13:02:16.349289] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20058][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=777389080576) [2024-09-13 13:02:16.349390] INFO register_pm (ob_page_manager.cpp:40) [20058][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bfd52340, pm.get_tid()=20058, tenant_id=500) [2024-09-13 13:02:16.349414] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] start tg(tg_id_=260, tg_name=IO_SCHEDULE) [2024-09-13 13:02:16.349414] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20058][IO_SCHEDULE4][T0][Y0-0000000000000000-0-0] [lt=17] io schedule thread started(thread_id=4) [2024-09-13 13:02:16.349596] INFO [SHARE] get_next_sess_id 
(ob_active_session_guard.cpp:336) [20059][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=781684047872)
[2024-09-13 13:02:16.349694] INFO register_pm (ob_page_manager.cpp:40) [20059][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bfdd0340, pm.get_tid()=20059, tenant_id=500)
[2024-09-13 13:02:16.349718] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=261, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.349719] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20059][IO_SCHEDULE5][T0][Y0-0000000000000000-0-0] [lt=15] io schedule thread started(thread_id=5)
[2024-09-13 13:02:16.349936] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20060][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=785979015168)
[2024-09-13 13:02:16.350043] INFO register_pm (ob_page_manager.cpp:40) [20060][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07bfe56340, pm.get_tid()=20060, tenant_id=500)
[2024-09-13 13:02:16.350074] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=262, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.350075] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20060][IO_SCHEDULE6][T0][Y0-0000000000000000-0-0] [lt=18] io schedule thread started(thread_id=6)
[2024-09-13 13:02:16.350280] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20061][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=790273982464)
[2024-09-13 13:02:16.350379] INFO register_pm (ob_page_manager.cpp:40) [20061][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07bfed4340, pm.get_tid()=20061, tenant_id=500)
[2024-09-13 13:02:16.350403] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=263, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.350404] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20061][IO_SCHEDULE7][T0][Y0-0000000000000000-0-0] [lt=15] io schedule thread started(thread_id=7)
[2024-09-13 13:02:16.350593] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20062][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=794568949760)
[2024-09-13 13:02:16.350707] INFO register_pm (ob_page_manager.cpp:40) [20062][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bff52340, pm.get_tid()=20062, tenant_id=500)
[2024-09-13 13:02:16.350737] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=264, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.350737] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20062][IO_SCHEDULE8][T0][Y0-0000000000000000-0-0] [lt=19] io schedule thread started(thread_id=8)
[2024-09-13 13:02:16.350946] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20063][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=798863917056)
[2024-09-13 13:02:16.351055] INFO register_pm (ob_page_manager.cpp:40) [20063][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07bffd0340, pm.get_tid()=20063, tenant_id=500)
[2024-09-13 13:02:16.351101] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=265, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.351102] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20063][IO_SCHEDULE9][T0][Y0-0000000000000000-0-0] [lt=40] io schedule thread started(thread_id=9)
[2024-09-13 13:02:16.351333] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20064][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=803158884352)
[2024-09-13 13:02:16.351456] INFO register_pm (ob_page_manager.cpp:40) [20064][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c0056340, pm.get_tid()=20064, tenant_id=500)
[2024-09-13 13:02:16.351488] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20064][IO_SCHEDULE10][T0][Y0-0000000000000000-0-0] [lt=18] io schedule thread started(thread_id=10)
[2024-09-13 13:02:16.351486] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=266, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.351698] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20065][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=807453851648)
[2024-09-13 13:02:16.351810] INFO register_pm (ob_page_manager.cpp:40) [20065][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c00d4340, pm.get_tid()=20065, tenant_id=500)
[2024-09-13 13:02:16.351842] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20065][IO_SCHEDULE11][T0][Y0-0000000000000000-0-0] [lt=18] io schedule thread started(thread_id=11)
[2024-09-13 13:02:16.351842] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] start tg(tg_id_=267, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.352025] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20066][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=811748818944)
[2024-09-13 13:02:16.352144] INFO register_pm (ob_page_manager.cpp:40) [20066][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c0152340, pm.get_tid()=20066, tenant_id=500)
[2024-09-13 13:02:16.352180] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=268, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.352181] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20066][IO_SCHEDULE12][T0][Y0-0000000000000000-0-0] [lt=18] io schedule thread started(thread_id=12)
[2024-09-13 13:02:16.352372] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20067][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=816043786240)
[2024-09-13 13:02:16.352474] INFO register_pm (ob_page_manager.cpp:40) [20067][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c01d0340, pm.get_tid()=20067, tenant_id=500)
[2024-09-13 13:02:16.352513] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=269, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.352512] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20067][IO_SCHEDULE13][T0][Y0-0000000000000000-0-0] [lt=19] io schedule thread started(thread_id=13)
[2024-09-13 13:02:16.352692] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20068][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=820338753536)
[2024-09-13 13:02:16.352820] INFO register_pm (ob_page_manager.cpp:40) [20068][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07c0256340, pm.get_tid()=20068, tenant_id=500)
[2024-09-13 13:02:16.352843] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=270, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.352843] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20068][IO_SCHEDULE14][T0][Y0-0000000000000000-0-0] [lt=18] io schedule thread started(thread_id=14)
[2024-09-13 13:02:16.352992] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20069][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=824633720832)
[2024-09-13 13:02:16.353074] INFO register_pm (ob_page_manager.cpp:40) [20069][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c02d4340, pm.get_tid()=20069, tenant_id=500)
[2024-09-13 13:02:16.353099] INFO start (ob_io_struct.cpp:1095) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] start tg(tg_id_=271, tg_name=IO_SCHEDULE)
[2024-09-13 13:02:16.353099] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20069][IO_SCHEDULE15][T0][Y0-0000000000000000-0-0] [lt=19] io schedule thread started(thread_id=15)
[2024-09-13 13:02:16.353289] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20070][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=828928688128)
[2024-09-13 13:02:16.353404] INFO register_pm (ob_page_manager.cpp:40) [20070][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c0352340, pm.get_tid()=20070, tenant_id=500)
[2024-09-13 13:02:16.353789] INFO [COMMON] run1 (ob_io_struct.cpp:1115) [20070][IO_SCHEDULE16][T0][Y0-0000000000000000-0-0] [lt=15] io schedule thread started(thread_id=16)
[2024-09-13 13:02:16.354020] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20071][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=833223655424)
[2024-09-13 13:02:16.354146] INFO register_pm (ob_page_manager.cpp:40) [20071][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c03d0340, pm.get_tid()=20071, tenant_id=500)
[2024-09-13 13:02:16.354176] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] simple thread pool init success(name=IO_HEALTH, thread_num=1, task_num_limit=100)
[2024-09-13 13:02:16.354187] INFO start (ob_io_struct.cpp:2954) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] start tg(TGDefIDs::IO_HEALTH=124, tg_name=IO_HEALTH)
[2024-09-13 13:02:16.354198] INFO [SERVER] start (ob_server.cpp:906) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] success to start io manager
[2024-09-13 13:02:16.354606] INFO [SERVER.OMT] update_tenant_memory (ob_multi_tenant.cpp:1242) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] reduce memory quota(mem_limit=1073741824, pre_mem_limit=9223372036854775807, target_mem_limit=1073741824, mem_hold=0)
[2024-09-13 13:02:16.354983] INFO [SERVER.OMT] add_tenant_config (ob_tenant_config_mgr.cpp:309) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] tenant config added(tenant_id=508, ret=0)
[2024-09-13 13:02:16.357292] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=508, ret=-4201)
[2024-09-13 13:02:16.357321] INFO [SERVER.OMT] construct_mtl_init_ctx (ob_tenant.cpp:887) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=21] construct_mtl_init_ctx success(palf_options={log_disk_size(MB):2048, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3})
[2024-09-13 13:02:16.357596] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20072][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=837518622720)
[2024-09-13 13:02:16.357694] INFO register_pm (ob_page_manager.cpp:40) [20072][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07c1056340, pm.get_tid()=20072, tenant_id=500)
[2024-09-13 13:02:16.357728] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20072][][T508][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=1)
[2024-09-13 13:02:16.357747] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20072][][T508][Y0-0000000000000000-0-0] [lt=12] Init thread local success
[2024-09-13 13:02:16.357767] INFO unregister_pm (ob_page_manager.cpp:50) [20072][][T508][Y0-0000000000000000-0-0] [lt=9] unregister pm finish(&pm=0x2b07c1056340, pm.get_tid()=20072)
[2024-09-13 13:02:16.357781] INFO register_pm (ob_page_manager.cpp:40) [20072][][T508][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c1056340, pm.get_tid()=20072, tenant_id=508)
[2024-09-13 13:02:16.357948] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20073][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=841813590016)
[2024-09-13 13:02:16.358020] INFO register_pm (ob_page_manager.cpp:40) [20073][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07c10d4340, pm.get_tid()=20073, tenant_id=500)
[2024-09-13 13:02:16.358056] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20073][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=2)
[2024-09-13 13:02:16.358067] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20073][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.358077] INFO unregister_pm (ob_page_manager.cpp:50) [20073][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c10d4340, pm.get_tid()=20073)
[2024-09-13 13:02:16.358094] INFO register_pm (ob_page_manager.cpp:40) [20073][][T508][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07c10d4340, pm.get_tid()=20073, tenant_id=508)
[2024-09-13 13:02:16.358218] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20074][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=846108557312)
[2024-09-13 13:02:16.358337] INFO register_pm (ob_page_manager.cpp:40) [20074][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07c1152340, pm.get_tid()=20074, tenant_id=500)
[2024-09-13 13:02:16.358368] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20074][][T508][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=3)
[2024-09-13 13:02:16.358379] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20074][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.358389] INFO unregister_pm (ob_page_manager.cpp:50) [20074][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c1152340, pm.get_tid()=20074)
[2024-09-13 13:02:16.358405] INFO register_pm (ob_page_manager.cpp:40) [20074][][T508][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07c1152340, pm.get_tid()=20074, tenant_id=508)
[2024-09-13 13:02:16.358546] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20075][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=850403524608)
[2024-09-13 13:02:16.358656] INFO register_pm (ob_page_manager.cpp:40) [20075][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c11d0340, pm.get_tid()=20075, tenant_id=500)
[2024-09-13 13:02:16.358676] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20075][][T508][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=4)
[2024-09-13 13:02:16.358687] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20075][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.358696] INFO unregister_pm (ob_page_manager.cpp:50) [20075][][T508][Y0-0000000000000000-0-0] [lt=9] unregister pm finish(&pm=0x2b07c11d0340, pm.get_tid()=20075)
[2024-09-13 13:02:16.358728] INFO register_pm (ob_page_manager.cpp:40) [20075][][T508][Y0-0000000000000000-0-0] [lt=31] register pm finish(ret=0, &pm=0x2b07c11d0340, pm.get_tid()=20075, tenant_id=508)
[2024-09-13 13:02:16.358906] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20076][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=854698491904)
[2024-09-13 13:02:16.359044] INFO register_pm (ob_page_manager.cpp:40) [20076][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1256340, pm.get_tid()=20076, tenant_id=500)
[2024-09-13 13:02:16.359073] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20076][][T508][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=5)
[2024-09-13 13:02:16.359084] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20076][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.359092] INFO unregister_pm (ob_page_manager.cpp:50) [20076][][T508][Y0-0000000000000000-0-0] [lt=7] unregister pm finish(&pm=0x2b07c1256340, pm.get_tid()=20076)
[2024-09-13 13:02:16.359115] INFO register_pm (ob_page_manager.cpp:40) [20076][][T508][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07c1256340, pm.get_tid()=20076, tenant_id=508)
[2024-09-13 13:02:16.359235] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20077][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=858993459200)
[2024-09-13 13:02:16.359359] INFO register_pm (ob_page_manager.cpp:40) [20077][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c12d4340, pm.get_tid()=20077, tenant_id=500)
[2024-09-13 13:02:16.359390] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20077][][T508][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=6)
[2024-09-13 13:02:16.359401] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20077][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.359407] INFO unregister_pm (ob_page_manager.cpp:50) [20077][][T508][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07c12d4340, pm.get_tid()=20077)
[2024-09-13 13:02:16.359424] INFO register_pm (ob_page_manager.cpp:40) [20077][][T508][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07c12d4340, pm.get_tid()=20077, tenant_id=508)
[2024-09-13 13:02:16.359549] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20078][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=863288426496)
[2024-09-13 13:02:16.359667] INFO register_pm (ob_page_manager.cpp:40) [20078][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c1352340, pm.get_tid()=20078, tenant_id=500)
[2024-09-13 13:02:16.359698] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20078][][T508][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=7)
[2024-09-13 13:02:16.359707] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20078][][T508][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:16.359713] INFO unregister_pm (ob_page_manager.cpp:50) [20078][][T508][Y0-0000000000000000-0-0] [lt=6] unregister pm finish(&pm=0x2b07c1352340, pm.get_tid()=20078)
[2024-09-13 13:02:16.359739] INFO register_pm (ob_page_manager.cpp:40) [20078][][T508][Y0-0000000000000000-0-0] [lt=25] register pm finish(ret=0, &pm=0x2b07c1352340, pm.get_tid()=20078, tenant_id=508)
[2024-09-13 13:02:16.359948] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20079][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=867583393792)
[2024-09-13 13:02:16.360037] INFO register_pm (ob_page_manager.cpp:40) [20079][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c13d0340, pm.get_tid()=20079, tenant_id=500)
[2024-09-13 13:02:16.360063] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20079][][T508][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=8)
[2024-09-13 13:02:16.360090] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20079][][T508][Y0-0000000000000000-0-0] [lt=26] Init thread local success
[2024-09-13 13:02:16.360096] INFO unregister_pm (ob_page_manager.cpp:50) [20079][][T508][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07c13d0340, pm.get_tid()=20079)
[2024-09-13 13:02:16.360108] INFO register_pm (ob_page_manager.cpp:40) [20079][][T508][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07c13d0340, pm.get_tid()=20079, tenant_id=508)
[2024-09-13 13:02:16.360309] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20080][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=871878361088)
[2024-09-13 13:02:16.360380] INFO register_pm (ob_page_manager.cpp:40) [20080][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1456340, pm.get_tid()=20080, tenant_id=500)
[2024-09-13 13:02:16.360400] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20080][][T508][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=9)
[2024-09-13 13:02:16.360407] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20080][][T508][Y0-0000000000000000-0-0] [lt=7] Init thread local success
[2024-09-13 13:02:16.360414] INFO unregister_pm (ob_page_manager.cpp:50) [20080][][T508][Y0-0000000000000000-0-0] [lt=6] unregister pm finish(&pm=0x2b07c1456340, pm.get_tid()=20080)
[2024-09-13 13:02:16.360428] INFO register_pm (ob_page_manager.cpp:40) [20080][][T508][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c1456340, pm.get_tid()=20080, tenant_id=508)
[2024-09-13 13:02:16.360602] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20081][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=876173328384)
[2024-09-13 13:02:16.360687] INFO register_pm (ob_page_manager.cpp:40) [20081][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c14d4340, pm.get_tid()=20081, tenant_id=500)
[2024-09-13 13:02:16.360708] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20081][][T508][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=10)
[2024-09-13 13:02:16.360719] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20081][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.360716] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=27][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=508, ret=-4201)
[2024-09-13 13:02:16.360730] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=508, ret=-4201)
[2024-09-13 13:02:16.360726] INFO unregister_pm (ob_page_manager.cpp:50) [20081][][T508][Y0-0000000000000000-0-0] [lt=6] unregister pm finish(&pm=0x2b07c14d4340, pm.get_tid()=20081)
[2024-09-13 13:02:16.360736] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=508, ret=-4201)
[2024-09-13 13:02:16.360737] INFO register_pm (ob_page_manager.cpp:40) [20081][][T508][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07c14d4340, pm.get_tid()=20081, tenant_id=508)
[2024-09-13 13:02:16.360740] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=508, ret=-4201)
[2024-09-13 13:02:16.360922] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20082][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=880468295680)
[2024-09-13 13:02:16.360991] INFO register_pm (ob_page_manager.cpp:40) [20082][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c1552340, pm.get_tid()=20082, tenant_id=500)
[2024-09-13 13:02:16.361010] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20082][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=11)
[2024-09-13 13:02:16.361017] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20082][][T508][Y0-0000000000000000-0-0] [lt=7] Init thread local success
[2024-09-13 13:02:16.361027] INFO unregister_pm (ob_page_manager.cpp:50) [20082][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c1552340, pm.get_tid()=20082)
[2024-09-13 13:02:16.361040] INFO register_pm (ob_page_manager.cpp:40) [20082][][T508][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1552340, pm.get_tid()=20082, tenant_id=508)
[2024-09-13 13:02:16.361208] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20083][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=884763262976)
[2024-09-13 13:02:16.361288] INFO register_pm (ob_page_manager.cpp:40) [20083][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c15d0340, pm.get_tid()=20083, tenant_id=500)
[2024-09-13 13:02:16.361311] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20083][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=12)
[2024-09-13 13:02:16.361322] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20083][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.361328] INFO unregister_pm (ob_page_manager.cpp:50) [20083][][T508][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07c15d0340, pm.get_tid()=20083)
[2024-09-13 13:02:16.361351] INFO register_pm (ob_page_manager.cpp:40) [20083][][T508][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07c15d0340, pm.get_tid()=20083, tenant_id=508)
[2024-09-13 13:02:16.361458] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20084][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=889058230272)
[2024-09-13 13:02:16.361529] INFO register_pm (ob_page_manager.cpp:40) [20084][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1656340, pm.get_tid()=20084, tenant_id=500)
[2024-09-13 13:02:16.361556] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20084][][T508][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=13)
[2024-09-13 13:02:16.361567] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20084][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.361573] INFO unregister_pm (ob_page_manager.cpp:50) [20084][][T508][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07c1656340, pm.get_tid()=20084)
[2024-09-13 13:02:16.361586] INFO register_pm (ob_page_manager.cpp:40) [20084][][T508][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1656340, pm.get_tid()=20084, tenant_id=508)
[2024-09-13 13:02:16.361730] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20085][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=893353197568)
[2024-09-13 13:02:16.361826] INFO register_pm (ob_page_manager.cpp:40) [20085][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c16d4340, pm.get_tid()=20085, tenant_id=500)
[2024-09-13 13:02:16.361867] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20085][][T508][Y0-0000000000000000-0-0] [lt=25] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=14)
[2024-09-13 13:02:16.361891] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20085][][T508][Y0-0000000000000000-0-0] [lt=23] Init thread local success
[2024-09-13 13:02:16.361897] INFO unregister_pm (ob_page_manager.cpp:50) [20085][][T508][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07c16d4340, pm.get_tid()=20085)
[2024-09-13 13:02:16.361913] INFO register_pm (ob_page_manager.cpp:40) [20085][][T508][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07c16d4340, pm.get_tid()=20085, tenant_id=508)
[2024-09-13 13:02:16.362065] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20086][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=897648164864)
[2024-09-13 13:02:16.362145] INFO register_pm (ob_page_manager.cpp:40) [20086][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1752340, pm.get_tid()=20086, tenant_id=500)
[2024-09-13 13:02:16.362168] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20086][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=15)
[2024-09-13 13:02:16.362179] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20086][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.362188] INFO unregister_pm (ob_page_manager.cpp:50) [20086][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c1752340, pm.get_tid()=20086)
[2024-09-13 13:02:16.362202] INFO register_pm (ob_page_manager.cpp:40) [20086][][T508][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c1752340, pm.get_tid()=20086, tenant_id=508)
[2024-09-13 13:02:16.362335] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20087][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=901943132160)
[2024-09-13 13:02:16.362413] INFO register_pm (ob_page_manager.cpp:40) [20087][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c17d0340, pm.get_tid()=20087, tenant_id=500)
[2024-09-13 13:02:16.362432] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20087][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=16)
[2024-09-13 13:02:16.362455] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20087][][T508][Y0-0000000000000000-0-0] [lt=22] Init thread local success
[2024-09-13 13:02:16.362465] INFO unregister_pm (ob_page_manager.cpp:50) [20087][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c17d0340, pm.get_tid()=20087)
[2024-09-13 13:02:16.362478] INFO register_pm (ob_page_manager.cpp:40) [20087][][T508][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c17d0340, pm.get_tid()=20087, tenant_id=508)
[2024-09-13 13:02:16.362643] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20088][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=906238099456)
[2024-09-13 13:02:16.362760] INFO register_pm (ob_page_manager.cpp:40) [20088][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1856340, pm.get_tid()=20088, tenant_id=500)
[2024-09-13 13:02:16.362790] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20088][][T508][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=17)
[2024-09-13 13:02:16.362800] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20088][][T508][Y0-0000000000000000-0-0] [lt=9] Init thread local success
[2024-09-13 13:02:16.362807] INFO unregister_pm (ob_page_manager.cpp:50) [20088][][T508][Y0-0000000000000000-0-0] [lt=6] unregister pm finish(&pm=0x2b07c1856340, pm.get_tid()=20088)
[2024-09-13 13:02:16.362825] INFO register_pm (ob_page_manager.cpp:40) [20088][][T508][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07c1856340, pm.get_tid()=20088, tenant_id=508)
[2024-09-13 13:02:16.363023] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20089][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=910533066752)
[2024-09-13 13:02:16.363127] INFO register_pm (ob_page_manager.cpp:40) [20089][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c18d4340, pm.get_tid()=20089, tenant_id=500)
[2024-09-13 13:02:16.363153] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20089][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=18)
[2024-09-13 13:02:16.363163] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20089][][T508][Y0-0000000000000000-0-0] [lt=9] Init thread local success
[2024-09-13 13:02:16.363173] INFO unregister_pm (ob_page_manager.cpp:50) [20089][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c18d4340, pm.get_tid()=20089)
[2024-09-13 13:02:16.363186] INFO register_pm (ob_page_manager.cpp:40) [20089][][T508][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c18d4340, pm.get_tid()=20089, tenant_id=508)
[2024-09-13 13:02:16.363284] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20090][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=914828034048)
[2024-09-13 13:02:16.363356] INFO register_pm (ob_page_manager.cpp:40) [20090][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1952340, pm.get_tid()=20090, tenant_id=500)
[2024-09-13 13:02:16.363379] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20090][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=19)
[2024-09-13 13:02:16.363390] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20090][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.363399] INFO unregister_pm (ob_page_manager.cpp:50) [20090][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c1952340, pm.get_tid()=20090)
[2024-09-13 13:02:16.363412] INFO register_pm (ob_page_manager.cpp:40) [20090][][T508][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1952340, pm.get_tid()=20090, tenant_id=508)
[2024-09-13 13:02:16.363543] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20091][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=919123001344)
[2024-09-13 13:02:16.363633] INFO register_pm (ob_page_manager.cpp:40) [20091][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c19d0340, pm.get_tid()=20091, tenant_id=500)
[2024-09-13 13:02:16.363667] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20091][][T508][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=20)
[2024-09-13 13:02:16.363677] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20091][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.363687] INFO unregister_pm (ob_page_manager.cpp:50) [20091][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c19d0340, pm.get_tid()=20091)
[2024-09-13 13:02:16.363700] INFO register_pm (ob_page_manager.cpp:40) [20091][][T508][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c19d0340, pm.get_tid()=20091, tenant_id=508)
[2024-09-13 13:02:16.363827] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20092][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=923417968640)
[2024-09-13 13:02:16.363945] INFO register_pm (ob_page_manager.cpp:40) [20092][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c1a56340, pm.get_tid()=20092, tenant_id=500)
[2024-09-13 13:02:16.363964] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20092][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=21)
[2024-09-13 13:02:16.363971] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20092][][T508][Y0-0000000000000000-0-0] [lt=7] Init thread local success
[2024-09-13 13:02:16.363980] INFO unregister_pm (ob_page_manager.cpp:50) [20092][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c1a56340, pm.get_tid()=20092)
[2024-09-13 13:02:16.364006] INFO register_pm (ob_page_manager.cpp:40) [20092][][T508][Y0-0000000000000000-0-0] [lt=25] register pm finish(ret=0, &pm=0x2b07c1a56340, pm.get_tid()=20092, tenant_id=508)
[2024-09-13 13:02:16.364160] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20093][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=927712935936)
[2024-09-13 13:02:16.364245] INFO register_pm (ob_page_manager.cpp:40) [20093][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1ad4340, pm.get_tid()=20093, tenant_id=500)
[2024-09-13 13:02:16.364264] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20093][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=22)
[2024-09-13 13:02:16.364270] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20093][][T508][Y0-0000000000000000-0-0] [lt=7] Init thread local success
[2024-09-13 13:02:16.364280] INFO unregister_pm (ob_page_manager.cpp:50) [20093][][T508][Y0-0000000000000000-0-0] [lt=9] unregister pm finish(&pm=0x2b07c1ad4340, pm.get_tid()=20093)
[2024-09-13 13:02:16.364295] INFO register_pm (ob_page_manager.cpp:40) [20093][][T508][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07c1ad4340, pm.get_tid()=20093, tenant_id=508)
[2024-09-13 13:02:16.364389] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20094][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=932007903232)
[2024-09-13 13:02:16.364468] INFO register_pm (ob_page_manager.cpp:40) [20094][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1b52340, pm.get_tid()=20094, tenant_id=500)
[2024-09-13 13:02:16.364502] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20094][][T508][Y0-0000000000000000-0-0] [lt=30] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=23)
[2024-09-13 13:02:16.364511] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20094][][T508][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:16.364520] INFO unregister_pm (ob_page_manager.cpp:50) [20094][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c1b52340, pm.get_tid()=20094)
[2024-09-13 13:02:16.364536] INFO register_pm (ob_page_manager.cpp:40) [20094][][T508][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07c1b52340, pm.get_tid()=20094, tenant_id=508)
[2024-09-13 13:02:16.364646] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20095][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=936302870528)
[2024-09-13 13:02:16.364778] INFO register_pm (ob_page_manager.cpp:40) [20095][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c1bd0340, pm.get_tid()=20095, tenant_id=500)
[2024-09-13 13:02:16.364807] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20095][][T508][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=24)
[2024-09-13 13:02:16.364818] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20095][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.364826] INFO unregister_pm (ob_page_manager.cpp:50) [20095][][T508][Y0-0000000000000000-0-0] [lt=7] unregister pm finish(&pm=0x2b07c1bd0340, pm.get_tid()=20095)
[2024-09-13 13:02:16.364842] INFO register_pm (ob_page_manager.cpp:40) [20095][][T508][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07c1bd0340, pm.get_tid()=20095, tenant_id=508)
[2024-09-13 13:02:16.364951] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20096][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=940597837824)
[2024-09-13 13:02:16.365028] INFO register_pm (ob_page_manager.cpp:40) [20096][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1e56340, pm.get_tid()=20096, tenant_id=500)
[2024-09-13 13:02:16.365050] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20096][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=25)
[2024-09-13 13:02:16.365061] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20096][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:16.365068] INFO unregister_pm (ob_page_manager.cpp:50) [20096][][T508][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07c1e56340, pm.get_tid()=20096)
[2024-09-13 13:02:16.365080] INFO register_pm (ob_page_manager.cpp:40) [20096][][T508][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c1e56340, pm.get_tid()=20096, tenant_id=508)
[2024-09-13 13:02:16.365174] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20097][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=944892805120)
[2024-09-13 13:02:16.365249]
INFO register_pm (ob_page_manager.cpp:40) [20097][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1ed4340, pm.get_tid()=20097, tenant_id=500) [2024-09-13 13:02:16.365274] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20097][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=26) [2024-09-13 13:02:16.365285] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20097][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success [2024-09-13 13:02:16.365291] INFO unregister_pm (ob_page_manager.cpp:50) [20097][][T508][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07c1ed4340, pm.get_tid()=20097) [2024-09-13 13:02:16.365303] INFO register_pm (ob_page_manager.cpp:40) [20097][][T508][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1ed4340, pm.get_tid()=20097, tenant_id=508) [2024-09-13 13:02:16.365410] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20098][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=949187772416) [2024-09-13 13:02:16.365511] INFO register_pm (ob_page_manager.cpp:40) [20098][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c1f52340, pm.get_tid()=20098, tenant_id=500) [2024-09-13 13:02:16.365536] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20098][][T508][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=27) [2024-09-13 13:02:16.365543] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20098][][T508][Y0-0000000000000000-0-0] [lt=7] Init thread local success [2024-09-13 13:02:16.365551] INFO unregister_pm (ob_page_manager.cpp:50) [20098][][T508][Y0-0000000000000000-0-0] [lt=6] unregister pm finish(&pm=0x2b07c1f52340, pm.get_tid()=20098) [2024-09-13 13:02:16.365565] INFO register_pm (ob_page_manager.cpp:40) [20098][][T508][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c1f52340, pm.get_tid()=20098, 
tenant_id=508) [2024-09-13 13:02:16.365719] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20099][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=953482739712) [2024-09-13 13:02:16.365812] INFO register_pm (ob_page_manager.cpp:40) [20099][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07c1fd0340, pm.get_tid()=20099, tenant_id=500) [2024-09-13 13:02:16.365834] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20099][][T508][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=28) [2024-09-13 13:02:16.365841] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20099][][T508][Y0-0000000000000000-0-0] [lt=6] Init thread local success [2024-09-13 13:02:16.365848] INFO unregister_pm (ob_page_manager.cpp:50) [20099][][T508][Y0-0000000000000000-0-0] [lt=6] unregister pm finish(&pm=0x2b07c1fd0340, pm.get_tid()=20099) [2024-09-13 13:02:16.365873] INFO register_pm (ob_page_manager.cpp:40) [20099][][T508][Y0-0000000000000000-0-0] [lt=24] register pm finish(ret=0, &pm=0x2b07c1fd0340, pm.get_tid()=20099, tenant_id=508) [2024-09-13 13:02:16.366002] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20100][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=957777707008) [2024-09-13 13:02:16.366111] INFO register_pm (ob_page_manager.cpp:40) [20100][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c2056340, pm.get_tid()=20100, tenant_id=500) [2024-09-13 13:02:16.366138] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20100][][T508][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=29) [2024-09-13 13:02:16.366148] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20100][][T508][Y0-0000000000000000-0-0] [lt=9] Init thread local success [2024-09-13 13:02:16.366154] INFO unregister_pm (ob_page_manager.cpp:50) [20100][][T508][Y0-0000000000000000-0-0] [lt=5] 
unregister pm finish(&pm=0x2b07c2056340, pm.get_tid()=20100) [2024-09-13 13:02:16.366180] INFO register_pm (ob_page_manager.cpp:40) [20100][][T508][Y0-0000000000000000-0-0] [lt=25] register pm finish(ret=0, &pm=0x2b07c2056340, pm.get_tid()=20100, tenant_id=508) [2024-09-13 13:02:16.366322] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20101][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=962072674304) [2024-09-13 13:02:16.366410] INFO register_pm (ob_page_manager.cpp:40) [20101][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c20d4340, pm.get_tid()=20101, tenant_id=500) [2024-09-13 13:02:16.366443] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20101][][T508][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=508, ret=0, thread_count_=30) [2024-09-13 13:02:16.366454] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20101][][T508][Y0-0000000000000000-0-0] [lt=10] Init thread local success [2024-09-13 13:02:16.366447] INFO [SERVER.OMT] check_worker_count (ob_tenant.cpp:1743) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] worker thread created(id_=508, token=22) [2024-09-13 13:02:16.366464] INFO unregister_pm (ob_page_manager.cpp:50) [20101][][T508][Y0-0000000000000000-0-0] [lt=8] unregister pm finish(&pm=0x2b07c20d4340, pm.get_tid()=20101) [2024-09-13 13:02:16.366479] INFO register_pm (ob_page_manager.cpp:40) [20101][][T508][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07c20d4340, pm.get_tid()=20101, tenant_id=508) [2024-09-13 13:02:16.366488] INFO [SERVER.OMT] set_create_status (ob_tenant.cpp:910) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set create status(tenant_id=508, unit_id=1000, new_status=1, old_status=0, tenant_meta={unit:{tenant_id:508, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:5, max_cpu:5, memory_size:"1GB", log_disk_size:"2GB", 
min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1726203736354211, is_removed:false}, super_block:{tenant_id:508, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true, version:2}, create_status:0}) [2024-09-13 13:02:16.366542] INFO [SERVER.OMT] create_tenant (ob_multi_tenant.cpp:1086) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=38] finish create new tenant(ret=0, tenant_id=508, write_slog=false, create_step=5, bucket_lock_idx=8326) [2024-09-13 13:02:16.366565] INFO [COMMON] set_tenant_mem_limit (ob_tenant_mgr.cpp:272) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] set tenant mem limit(tenant id=508, mem_lower_limit=0, mem_upper_limit=1073741824, mem_tenant_limit=1073741824, mem_tenant_hold=23621632, kv_cache_mem=0) [2024-09-13 13:02:16.366585] INFO [COMMON] set_tenant_mem_limit (ob_tenant_mgr.cpp:272) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=17] set tenant mem limit(tenant id=500, mem_lower_limit=0, mem_upper_limit=9223372036854775807, mem_tenant_limit=9223372036854775807, mem_tenant_hold=494653440, kv_cache_mem=0) [2024-09-13 13:02:16.366779] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20102][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=966367641600) [2024-09-13 13:02:16.366928] INFO register_pm (ob_page_manager.cpp:40) [20102][][T0][Y0-0000000000000000-0-0] [lt=23] register pm finish(ret=0, &pm=0x2b07c2256340, pm.get_tid()=20102, tenant_id=500) [2024-09-13 13:02:16.366975] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.366992] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] 
[lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.367000] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.367152] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20103][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=970662608896) [2024-09-13 13:02:16.367246] INFO register_pm (ob_page_manager.cpp:40) [20103][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07c22d4340, pm.get_tid()=20103, tenant_id=500) [2024-09-13 13:02:16.367278] INFO [SERVER.OMT] start (ob_multi_tenant.cpp:596) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] succ to start multi tenant [2024-09-13 13:02:16.367288] INFO [SERVER] start (ob_server.cpp:912) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] success to start multi tenant [2024-09-13 13:02:16.367284] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:92) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7D-0-0] [lt=14] server slog not finish replaying, need wait [2024-09-13 13:02:16.367296] INFO create_tg (thread_mgr.h:1003) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=297, tg=0x2b07b7591bb0, thread_cnt=1, tg->attr_={name:WrTimer, type:3}) [2024-09-13 13:02:16.367299] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:109) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7D-0-0] [lt=9] refresh tenant units(sys_unit_cnt=0, units=[], ret=-4036, ret="OB_NEED_RETRY") [2024-09-13 13:02:16.367306] INFO start (ob_wr_task.cpp:89) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id_=297, tg_name=WrTimer) [2024-09-13 13:02:16.367313] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:122) 
[20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7D-0-0] [lt=10] server slog not finish replaying, need wait [2024-09-13 13:02:16.367321] WDIAG [CLOG] try_resize (ob_server_log_block_mgr.cpp:789) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7D-0-0] [lt=7][errcode=-4250] ObServerLogBlockMgr not running, can not support resize(this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:0, status:0}, min_block_id:0, max_block_id:0, min_log_disk_size_for_all_tenants_:0, is_inited:true}) [2024-09-13 13:02:16.367334] WDIAG [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:130) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7D-0-0] [lt=13][errcode=-4036] ObServerLogBlockMgr try_resize failed(tmp_ret=-4250) [2024-09-13 13:02:16.367342] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:133) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7D-0-0] [lt=8] refresh tenant config(tenants=[], ret=-4036) [2024-09-13 13:02:16.367470] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20104][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=974957576192) [2024-09-13 13:02:16.367567] INFO register_pm (ob_page_manager.cpp:40) [20104][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07c2352340, pm.get_tid()=20104, tenant_id=500) [2024-09-13 13:02:16.367630] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ObTimer create success(this=0x2b07b7591bd0, thread_id=20104, lbt()=0x24edc06b 0x13836960 0x115a4182 0x1235a59b 0xb8f869f 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:16.367647] INFO [WR] start (ob_wr_task.cpp:93) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] init wr task thread finished(tg_id=297) [2024-09-13 13:02:16.367654] INFO [SERVER] start (ob_server.cpp:918) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] success to start wr service [2024-09-13 
13:02:16.367669] INFO [STORAGE] get_meta_blocks (ob_linked_macro_block_reader.cpp:99) [19877][observer][T500][Y0-0000000000000001-0-0] [lt=4] get meta blocks(macros_handle_={macro_id_list:[]}) [2024-09-13 13:02:16.367818] INFO [COMMON] get_file_id_range (ob_log_file_group.cpp:129) [19877][observer][T500][Y0-0000000000000001-0-0] [lt=11] log dir is empty(ret=-4018, log_dir="/data1/oceanbase/data/slog/server") [2024-09-13 13:02:16.367835] WDIAG [STORAGE.REDO] replay (ob_storage_log_replayer.cpp:147) [19877][observer][T500][Y0-0000000000000001-0-0] [lt=11][errcode=0] There is no redo log(replay_start_cursor=ObLogCursor{file_id=1, log_id=1, offset=0}) [2024-09-13 13:02:16.367944] INFO run1 (ob_timer.cpp:361) [20104][][T0][Y0-0000000000000000-0-0] [lt=14] timer thread started(this=0x2b07b7591bd0, tid=20104, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:16.367956] INFO [STORAGE.REDO] start_log (ob_storage_log_writer.cpp:175) [19877][observer][T500][Y0-0000000000000001-0-0] [lt=12] slog writer start log(ret=0, start_cursor=ObLogCursor{file_id=1, log_id=1, offset=0}) [2024-09-13 13:02:16.367967] INFO [STORAGE] apply_replay_result (ob_server_checkpoint_slog_handler.cpp:811) [19877][observer][T500][Y0-0000000000000001-0-0] [lt=9] finish replay create tenants(ret=0, tenant_count=0) [2024-09-13 13:02:16.368019] WDIAG [STORAGE.BLKMGR] get_next_block (ob_block_manager.cpp:1522) [19877][observer][T500][Y0-0000000000000001-0-0] [lt=7][errcode=-4008] fail to get next block(ret=-4008) [2024-09-13 13:02:16.368127] INFO [SHARE] mark_blocks (ob_local_device.cpp:749) [19877][observer][T500][Y0-0000000000000001-0-0] [lt=10] The local block device has been masked, (ret=0, free_block_cnt=10238, total_block_cnt_=10240) [2024-09-13 13:02:16.368180] INFO [STORAGE] finish_slog_replay (ob_server_checkpoint_slog_handler.cpp:247) [19877][observer][T500][Y0-0000000000000001-0-0] [lt=9] finish slog replay(ret=0) [2024-09-13 13:02:16.368190] INFO 
[STORAGE] enable_replay_clog (ob_server_checkpoint_slog_handler.cpp:277) [19877][observer][T500][Y0-0000000000000001-0-0] [lt=8] enable replay clog(ret=0) [2024-09-13 13:02:16.368199] INFO [STORAGE] start (ob_server_checkpoint_slog_handler.cpp:122) [19877][observer][T500][Y0-0000000000000001-0-0] [lt=7] succ to start server checkpoint slog handler [2024-09-13 13:02:16.368205] INFO [SERVER] start (ob_server.cpp:924) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] success to start server checkpoint slog handler [2024-09-13 13:02:16.368221] INFO [CLOG] check_space_is_enough_ (ob_server_log_block_mgr.cpp:814) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=3] check_space_is_enough_ finished(all_tenants_log_disk_size=0, log_disk_size=21474836480) [2024-09-13 13:02:16.368231] INFO [CLOG] update_checksum (ob_server_log_block_mgr.cpp:1529) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] update_checksum success(this={magic:19536, version:1, flag:0, log_pool_meta:{curr_total_size:0, next_total_size:21474836480, status:1}, checksum:1263619234}) [2024-09-13 13:02:16.377089] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:2420) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7] dump tenant info(tenant={id:508, tenant_meta:{unit:{tenant_id:508, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:5, max_cpu:5, memory_size:"1GB", log_disk_size:"2GB", min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1726203736354211, is_removed:false}, super_block:{tenant_id:508, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true, version:2}, create_status:1}, unit_min_cpu:"5.000000000000000000e+00", unit_max_cpu:"5.000000000000000000e+00", total_worker_cnt:30, min_worker_cnt:22, max_worker_cnt:150, stopped:0, worker_us:0, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:0, 
recv_lp_rpc_cnt:0, recv_mysql_cnt:0, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:40, workers:22, nesting workers:8, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:, rpc_stat_info:, token_change_ts:1726203736360714, tenant_role:0}) [2024-09-13 13:02:16.377590] INFO [SERVER.OMT] print_throttled_time (ob_tenant.cpp:1666) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=499] dump throttled time info(id_=508, throttled_time_log=tenant_id: 508, tenant_throttled_time: 0;) [2024-09-13 13:02:16.377606] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.377616] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.377619] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.387703] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.387724] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to 
get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.387728] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.397805] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.397829] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.397834] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.407912] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.407942] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=27][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.407946] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.414196] INFO [CLOG] update_log_pool_meta_guarded_by_lock_ (ob_server_log_block_mgr.cpp:877) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] 
update_log_pool_meta_guarded_by_lock_ success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:21474836480, status:1}, min_block_id:0, max_block_id:0, min_log_disk_size_for_all_tenants_:0, is_inited:true}) [2024-09-13 13:02:16.415079] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14379452826, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508]) [2024-09-13 13:02:16.418019] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.418041] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.418046] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.418349] INFO [CLOG] make_resizing_tmp_dir_ (ob_server_log_block_mgr.cpp:1059) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=24] make_resizing_tmp_dir_ success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:21474836480, status:1}, min_block_id:0, max_block_id:0, min_log_disk_size_for_all_tenants_:0, is_inited:true}, dir_path="/data1/oceanbase/data/clog/log_pool/expanding.tmp") [2024-09-13 13:02:16.422846] INFO [CLOG] allocate_block_at_tmp_dir_ (ob_server_log_block_mgr.cpp:1146) [19877][observer][T0][Y0-0000000000000001-0-0] 
[lt=16] allocate_block_at_ success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:21474836480, status:1}, min_block_id:0, max_block_id:0, min_log_disk_size_for_all_tenants_:0, is_inited:true}, dir_fd=108, block_id=0) [2024-09-13 13:02:16.428119] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.428139] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.428143] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.438214] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.438236] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.438241] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.448313] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback 
tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.448342] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=26][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.448347] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.457683] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=29] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8) [2024-09-13 13:02:16.458416] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.458432] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.458455] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=22][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.468486] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.468508] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback 
tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.468513] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.478586] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.478610] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.478617] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.488702] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.488724] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.488728] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.498814] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback 
tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.498836] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.498844] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.508926] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.508945] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.508951] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.518995] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.519013] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.519018] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.529058] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.529079] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.529084] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.539149] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.539172] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.539177] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.546740] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] sock regist: 0x2b07b3e0de70 fd=110
[2024-09-13 13:02:16.546768] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=24] [ussl] accept new connection, fd:110, src_addr:172.16.51.38:48324
[2024-09-13 13:02:16.546793] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] [ussl] auth mothod is NONE, the fd will be dispatched, fd:110, src_addr:172.16.51.38:48324
[2024-09-13 13:02:16.546822] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=27] PNIO dispatch fd to certain group, fd:110, gid:0x100000002
[2024-09-13 13:02:16.546866] INFO pkts_sk_init (pkts_sk_factory.h:23) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO set pkts_sk_t sock_id s=0x2b07b0a04808, s->id=65535
[2024-09-13 13:02:16.546891] INFO pkts_sk_new (pkts_sk_factory.h:51) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=24] PNIO sk_new: s=0x2b07b0a04808
[2024-09-13 13:02:16.546903] INFO eloop_regist (eloop.c:47) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] PNIO sock regist: 0x2b07b0a04808 fd=110
[2024-09-13 13:02:16.546912] INFO on_accept (listenfd.c:39) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO accept new connection, ns=0x2b07b0a04808, fd=fd:110:local:"172.16.51.38:48324":remote:"172.16.51.38:48324"
[2024-09-13 13:02:16.546993] WDIAG [SERVER] deliver_rpc_request (ob_srv_deliver.cpp:602) [19932][pnio1][T0][YB42AC103326-00062119ED1D6A37-0-0] [lt=3][errcode=-5150] can't deliver request(req={packet:{hdr_:{checksum_:3218413070, pcode_:1815, hlen_:184, priority_:5, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:9000000, timestamp:1726203736545763, dst_cluster_id:-1, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62032756, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203736466826}, chid_:0, clen_:49, assemble:false, msg_count:0, payload:0}, type:0, group:0, sql_req_level:0, connection_phase:0, recv_timestamp_:1726203736546984, enqueue_timestamp_:0, request_arrival_time_:1726203736546984, trace_id_:Y0-0000000000000000-0-0}, ret=-5150)
[2024-09-13 13:02:16.547095] WDIAG [SERVER] deliver (ob_srv_deliver.cpp:766) [19932][pnio1][T0][YB42AC103326-00062119ED1D6A37-0-0] [lt=39][errcode=-5150] deliver rpc request fail(&req=0x2b07c2404098, ret=-5150)
[2024-09-13 13:02:16.547103] WDIAG listenfd_handle_event (listenfd.c:71) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=5][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1
[2024-09-13 13:02:16.549246] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.549266] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.549271] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.559344] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.559367] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.559372] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.569453] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.569475] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.569480] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.579547] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.579563] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.579567] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.589645] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.589666] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.589670] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.599746] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.599767] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.599777] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.609815] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.609832] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.609836] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.612198] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=10] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:16.612348] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=9][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4006, dropped:125829, tid:19887}])
[2024-09-13 13:02:16.615476] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=31] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14379452826, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508])
[2024-09-13 13:02:16.619894] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.619914] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.619919] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.629957] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.629976] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.629980] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.640031] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.640062] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=28][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.640078] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.646456] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] sock regist: 0x2b07b3e0de70 fd=111
[2024-09-13 13:02:16.646480] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=20] [ussl] accept new connection, fd:111, src_addr:172.16.51.38:48326
[2024-09-13 13:02:16.646504] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] [ussl] auth mothod is NONE, the fd will be dispatched, fd:111, src_addr:172.16.51.38:48326
[2024-09-13 13:02:16.646516] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=11] PNIO dispatch fd to certain group, fd:111, gid:0x100000000
[2024-09-13 13:02:16.646547] INFO pkts_sk_init (pkts_sk_factory.h:23) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=33] PNIO set pkts_sk_t sock_id s=0x2b07b0a05218, s->id=65535
[2024-09-13 13:02:16.646558] INFO pkts_sk_new (pkts_sk_factory.h:51) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=10] PNIO sk_new: s=0x2b07b0a05218
[2024-09-13 13:02:16.646569] INFO eloop_regist (eloop.c:47) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO sock regist: 0x2b07b0a05218 fd=111
[2024-09-13 13:02:16.646583] INFO on_accept (listenfd.c:39) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO accept new connection, ns=0x2b07b0a05218, fd=fd:111:local:"172.16.51.38:48326":remote:"172.16.51.38:48326"
[2024-09-13 13:02:16.646707] WDIAG listenfd_handle_event (listenfd.c:71) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=9][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1
[2024-09-13 13:02:16.650153] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.650178] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.650183] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.657795] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=22] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7)
[2024-09-13 13:02:16.660256] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.660282] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=22][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.660287] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.667902] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=10] [ussl] sock regist: 0x2b07b3e0de70 fd=112
[2024-09-13 13:02:16.667929] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=23] [ussl] accept new connection, fd:112, src_addr:172.16.51.38:48328
[2024-09-13 13:02:16.667959] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] auth mothod is NONE, the fd will be dispatched, fd:112, src_addr:172.16.51.38:48328
[2024-09-13 13:02:16.667973] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=13] PNIO dispatch fd to certain group, fd:112, gid:0x100000001
[2024-09-13 13:02:16.668012] INFO pkts_sk_init (pkts_sk_factory.h:23) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO set pkts_sk_t sock_id s=0x2b07b0a62048, s->id=65535
[2024-09-13 13:02:16.668025] INFO pkts_sk_new (pkts_sk_factory.h:51) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=12] PNIO sk_new: s=0x2b07b0a62048
[2024-09-13 13:02:16.668037] INFO eloop_regist (eloop.c:47) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] PNIO sock regist: 0x2b07b0a62048 fd=112
[2024-09-13 13:02:16.668045] INFO on_accept (listenfd.c:39) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] PNIO accept new connection, ns=0x2b07b0a62048, fd=fd:112:local:"172.16.51.38:48328":remote:"172.16.51.38:48328"
[2024-09-13 13:02:16.668132] WDIAG listenfd_handle_event (listenfd.c:71) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1
[2024-09-13 13:02:16.668591] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [19932][pnio1][T0][YB42AC103326-00062119D7143C57-0-0] [lt=4][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203736668194, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62032759, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203736667954}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:16.668623] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C57-0-0] [lt=31][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.669111] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C57-0-0] [lt=6][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.669355] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C58-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.669754] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C58-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.670359] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.670377] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.670382] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.670460] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C59-0-0] [lt=5][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.670798] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C59-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.672947] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C5A-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.673358] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C5A-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.676396] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C5B-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.676739] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C5B-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.680452] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.680471] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.680481] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.681093] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C5C-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.681536] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C5C-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.686677] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C5D-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.687290] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C5D-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.690551] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.690572] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.690577] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.693282] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C5E-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.693714] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C5E-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.700647] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.700666] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.700671] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.700788] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C5F-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.701223] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C5F-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.709022] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C60-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.709417] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C60-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.709642] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C61-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.709972] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C61-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.710641] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C62-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.710735] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.710751] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.710756] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.710990] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C62-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.713035] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C63-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.713387] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C63-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.716574] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C64-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.716969] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C64-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.718782] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C65-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.719240] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C65-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.720823] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.720850] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=24][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.720855] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.721308] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C66-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.721765] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C66-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.727093] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C67-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.727520] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C67-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.729471] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C68-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.729941] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C68-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.730891] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.730909] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.730914] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.733670] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C69-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.734590] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119ED82E19E-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.735317] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119ED82E19E-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.735358] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C69-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.736949] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C6A-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.737415] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C6A-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.737652] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C6B-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.738703] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C6B-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.738933] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C6C-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.739298] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C6C-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.740982] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.740996] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.741001] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.740999] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C6D-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.742309] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C6D-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.742560] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C6E-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.742976] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C6E-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.744824] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C6F-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.745391] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C6F-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.749559] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C70-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.750080] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C70-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.750366] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C71-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.750757] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C71-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.751068] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.751082] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.751086] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.753485] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C72-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.754335] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C72-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.755139] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C73-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.755621] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C73-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.760109] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C74-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.760670] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C74-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.761155] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.761174] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.761179] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.761716] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C75-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.762148] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C75-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.767254] WDIAG [RPC.FRAME] run 
(ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C76-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.767802] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C76-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.769287] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C77-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.769817] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C77-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.770768] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C78-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.771249] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.771264] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.771269] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.771290] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C78-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.777974] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) 
[19930][pnio1][T0][YB42AC103326-00062119D7143C79-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.778564] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C79-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.781339] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.781352] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.781357] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.781759] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C7A-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.783127] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C7A-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.783388] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C7B-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.783803] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C7B-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.787461] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C7C-0-0] 
[lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.787944] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C7C-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.790743] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C7D-0-0] [lt=31][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.791239] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C7D-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.791422] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.791444] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.791449] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.791448] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C7E-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.791770] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C7E-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.792463] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C7F-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) 
[2024-09-13 13:02:16.792824] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C7F-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.795244] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C80-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.795890] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C80-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.796134] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C81-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.796519] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C81-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.797114] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C82-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.797654] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C82-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.797904] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C83-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.798241] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C83-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.798433] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C84-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.798751] WDIAG 
[RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C84-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.801519] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.801537] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.801542] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.802979] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C85-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.803362] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C85-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.808564] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C86-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.809294] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C86-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.809587] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C87-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.810046] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) 
[19932][pnio1][T0][YB42AC103326-00062119D7143C87-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.811609] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.811625] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.811630] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.813705] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C88-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.814250] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C88-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.815027] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C89-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.815595] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C89-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.815865] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14379452826, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508]) [2024-09-13 
13:02:16.821704] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.821718] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.821723] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.822322] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C8A-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.822795] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C8A-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.823089] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C8B-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.823617] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C8B-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.823856] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C8C-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.824238] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C8C-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.831341] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) 
[19931][pnio1][T0][YB42AC103326-00062119D7143C8D-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.831788] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.831798] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C8D-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.831801] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.831806] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.835561] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C8E-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.836010] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C8E-0-0] [lt=6][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.839520] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C8F-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.840026] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C8F-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.840913] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C90-0-0] 
[lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.841373] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C90-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.841882] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.841896] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.841900] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.849969] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C91-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.850516] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C91-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.850765] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C92-0-0] [lt=28][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.851209] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C92-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.851482] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C93-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) 
[2024-09-13 13:02:16.851807] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C93-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.851964] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.851979] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.851983] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.856084] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C94-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.856801] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C94-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.857913] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=31] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6) [2024-09-13 13:02:16.862054] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.862070] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] 
[lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.862074] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.863452] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C95-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.863964] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C95-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.865635] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C96-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.866038] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C96-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.869664] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C97-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.870059] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C97-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.872144] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.872158] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4201] failed to get fallback tenant 
config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.872163] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.873065] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=18] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:16.873223] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:16.873561] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C98-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.874040] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C98-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.874095] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=17] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:16.875978] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C99-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.876473] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C99-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.878126] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D8E48924-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.878528] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) 
[19930][pnio1][T0][YB42AC103326-00062119D8E48924-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.880069] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C9A-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.880545] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C9A-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.880853] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C9B-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.881372] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C9B-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.881796] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C9C-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:16.882237] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.882251] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.882255] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:16.882400] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143C9C-0-0] 
[lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.882599] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C9D-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.882945] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C9D-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.884111] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C9E-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.884581] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C9E-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.887662] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143C9F-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.888134] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143C9F-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.889680] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CA0-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.890409] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CA0-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.890650] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CA1-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.891405] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CA1-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.892217] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CA2-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.892320] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.892334] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.892339] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.892630] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CA2-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.892889] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CA3-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.893246] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CA3-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.897817] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CA4-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.898319] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CA4-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.899647] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CA5-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.900137] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CA5-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.902414] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.902429] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.902441] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.904374] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CA6-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.904987] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CA6-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.905248] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CA7-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.905579] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CA7-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.912257] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CA8-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.912527] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.912547] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.912576] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=28][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.912742] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CA8-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.913002] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CA9-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.913395] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CA9-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.918272] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CAA-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.918684] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CAA-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.920182] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CAB-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.920622] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CAB-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.920837] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CAC-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.921306] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CAC-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.922651] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.922667] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.922673] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.930350] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CAD-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.930906] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CAD-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.932755] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.932773] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.932780] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.932867] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CAE-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.933220] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CAE-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.934628] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CAF-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.935107] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CAF-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.936982] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CB0-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.937588] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CB0-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.938512] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CB1-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.938936] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CB1-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.940961] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CB2-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.941712] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CB2-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.942851] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.942867] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.942872] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.946702] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CB3-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.947185] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CB3-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.952641] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CB4-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.952952] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.952965] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.952974] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.953093] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CB4-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.954424] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CB5-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.954832] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CB5-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.955109] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CB6-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.955569] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CB6-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.958278] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CB7-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.958644] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CB7-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.958986] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CB8-0-0] [lt=24][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:16.963046] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.963061] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.963065] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.973135] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.973148] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.973152] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.983219] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.983230] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.983234] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.993304] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.993319] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:16.993329] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.003401] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.003420] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.003426] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.013497] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.013513] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.013518] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.016229] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=44] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14379452826, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508])
[2024-09-13 13:02:17.023586] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.023602] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.023607] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.033678] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.033699] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.033704] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.043776] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.043789] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.043793] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.053867] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.053889] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.053894] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.058011] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=24] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5)
[2024-09-13 13:02:17.063967] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.063986] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.063994] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.074067] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.074089] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.074095] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.084166] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.084187] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.084193] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.093248] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=18] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.093274] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=15] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.093981] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=17] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.094268] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.094287] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.094295] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.094694] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=18] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.094715] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=16] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.095097] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=14] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.095216] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=22] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.095255] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=14] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.095312] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.104372] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.104392] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.104399] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.104692] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=1] PNIO [ratelimit] time: 1726203737104691, bytes: 812522, bw: 0.773888 MB/s, add_ts: 1001283, add_bytes: 812522
[2024-09-13 13:02:17.107580] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143CE3-0-0] [lt=19][errcode=-8004] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:17.107885] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143CE4-0-0] [lt=1][errcode=-8004] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:17.114479] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.114495] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.114500] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.117922] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=8] swc wakeup.(stat_period_=1000000, ready=false)
[2024-09-13 13:02:17.119507] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143CE5-0-0] [lt=0][errcode=-8004] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:17.121133] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=23] PNIO [ratelimit] time: 1726203737121131, bytes: 0, bw: 0.000000 MB/s, add_ts: 1006868, add_bytes: 0
[2024-09-13 13:02:17.124572] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.124589] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.124594] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.134666] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.134682] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.134687] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.144761] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.144782] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.144787] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.154858] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.154872] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.154889] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.164957] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.164978] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.164983] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.175071] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.175096] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.175101] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.185172] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.185196] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.185201] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.195272] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.195290] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.195294] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.205358] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.205378] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.205385] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.215460] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.215480] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.215486] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.216563] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14379452826, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508])
[2024-09-13 13:02:17.225554] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to
get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.225580] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=24][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.225585] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.235663] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.235682] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.235687] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.235715] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:17.235729] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:17.235741] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:17.235753] WDIAG [SERVER] runTimerTask 
(ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:17.245760] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.245779] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.245785] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.255857] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.255884] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=25][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.255889] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.258096] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=15] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4) [2024-09-13 13:02:17.265986] WDIAG [SERVER.OMT] 
get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.266012] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=23][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.266019] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.276092] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.276110] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.276115] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.286185] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.286205] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.286210] WDIAG [SERVER.OMT] 
get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.296278] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.296299] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.296303] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.306365] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.306385] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.306390] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.316458] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.316479] WDIAG [SERVER.OMT] 
get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.316483] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.326551] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.326569] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.326574] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.336651] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.336672] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.336678] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.346775] WDIAG [SERVER.OMT] 
get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.346800] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=22][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.346809] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.356894] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.356922] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=24][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.356930] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.367365] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.367387] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.367395] WDIAG [SERVER.OMT] 
get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.377473] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.377499] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=23][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.377514] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.387593] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.387618] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201) [2024-09-13 13:02:17.387626] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:17.416995] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=19] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14379452826, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508]) 
[2024-09-13 13:02:17.458203] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3) [2024-09-13 13:02:17.612968] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=62] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 
9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:17.617337] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=29] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14379452826, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508]) [2024-09-13 13:02:17.658324] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=30] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2) 
[2024-09-13 13:02:17.713828] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=26][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-8004, dropped:342, tid:19932}]) [2024-09-13 13:02:17.718465] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D5D-0-0] [lt=2][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.719028] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D5D-0-0] [lt=1][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.723719] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D5E-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.724287] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D5E-0-0] [lt=1][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.724534] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D5F-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.724963] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D5F-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.727092] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D60-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.727581] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D60-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.728913] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D61-0-0] 
[lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.729456] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D61-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.734011] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D62-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.734471] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D62-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.740630] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D63-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.741720] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D63-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.753375] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D64-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.753915] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D64-0-0] [lt=30][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.754335] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D65-0-0] [lt=24][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.754863] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D65-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.765604] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D66-0-0] [lt=15][errcode=-8004] 
checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.766143] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D66-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.766763] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D67-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.767156] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D67-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.767991] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D68-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.768461] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D68-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.771748] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D69-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.772143] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D69-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.772489] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D6A-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.772895] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D6A-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.781493] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D6B-0-0] [lt=30][errcode=-8004] checking cluster ID 
failed(ret=-8004) [2024-09-13 13:02:17.782005] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D6B-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.797201] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D6C-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.797739] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D6C-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.798930] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D6D-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.799292] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D6D-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.808148] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D6E-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.808612] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D6E-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.810093] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D6F-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.810572] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D6F-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.812586] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D70-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 
13:02:17.813062] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D70-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.813527] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D71-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.813923] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D71-0-0] [lt=38][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.813971] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=21][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4201, dropped:127, tid:20102}])
[2024-09-13 13:02:17.817677] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14379452826, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508])
[2024-09-13 13:02:17.819110] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D72-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.819595] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D72-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.820722] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=1][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.820746] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.820751] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.823058] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=5343, clean_start_pos=0, clean_num=125829)
[2024-09-13 13:02:17.830819] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.830839] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.830843] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.831357] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D73-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.831895] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D73-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.840891] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.840910] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.840924] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.845734] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D74-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.846319] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D74-0-0] [lt=30][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.849696] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D75-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.850296] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D75-0-0] [lt=29][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.850582] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D76-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.850929] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D76-0-0] [lt=26][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.850991] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.851006] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.851019] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.851160] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D77-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.851523] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D77-0-0] [lt=28][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.851748] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D78-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.852144] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D78-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.858304] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D79-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.858422] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=24] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1)
[2024-09-13 13:02:17.858789] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D79-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.861094] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.861118] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.861125] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.866804] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D7A-0-0] [lt=25][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.867536] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D7A-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.869659] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D7B-0-0] [lt=28][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.870091] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D7B-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.871199] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.871222] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.871229] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.872990] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.873165] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.873356] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:17.880000] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D8E48924-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.881305] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.881323] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.881342] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.881496] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D7C-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.882108] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D7C-0-0] [lt=26][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.882450] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D7D-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.882930] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D7D-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.883395] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D7E-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.883810] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D7E-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.885952] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D7F-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.886536] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D7F-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.889391] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D80-0-0] [lt=28][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.889964] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D80-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.890194] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D81-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.890693] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D81-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.891413] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.891430] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.891440] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.893150] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D82-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.893639] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D82-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.893942] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D83-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.894292] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D83-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.899502] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D84-0-0] [lt=28][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.899998] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D84-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.901514] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.901537] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.901541] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.905048] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D85-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.905647] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D85-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.906088] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D86-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.906510] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D86-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.911616] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.911638] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.911643] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.911796] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D87-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.912294] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D87-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.913517] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D88-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.913971] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D88-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.915285] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D89-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.915602] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D89-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.921716] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.921742] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=23][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.921747] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.922029] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D8A-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.922465] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D8A-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.931641] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D8B-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.931818] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.931836] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.931841] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.932121] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D8B-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.934351] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D8C-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.934712] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D8C-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.941752] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D8D-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.941888] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.941906] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.941913] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.942175] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D8D-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.942453] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D8E-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.942870] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D8E-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.947518] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D8F-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.947929] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D8F-0-0] [lt=26][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.951805] INFO [CLOG] fsync_after_rename_ (ob_server_log_block_mgr.cpp:961) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=21] fsync_after_rename_ success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:21474836480, status:1}, min_block_id:0, max_block_id:320, min_log_disk_size_for_all_tenants_:0, is_inited:true}, dest_dir_fd=108)
[2024-09-13 13:02:17.951834] INFO [CLOG] do_expand_ (ob_server_log_block_mgr.cpp:1033) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=28] do_expand_ success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:21474836480, status:1}, min_block_id:0, max_block_id:320, min_log_disk_size_for_all_tenants_:0, is_inited:true})
[2024-09-13 13:02:17.951936] INFO [CLOG] remove_resizing_tmp_dir_ (ob_server_log_block_mgr.cpp:1074) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] remove_resizing_tmp_dir_ success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:21474836480, status:1}, min_block_id:0, max_block_id:320, min_log_disk_size_for_all_tenants_:0, is_inited:true}, dir_path="/data1/oceanbase/data/clog/log_pool/expanding.tmp")
[2024-09-13 13:02:17.951995] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.952015] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.952023] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.953581] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D90-0-0] [lt=30][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.953938] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D90-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.954155] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D91-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.954471] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D91-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.957861] INFO [CLOG] do_resize_ (ob_server_log_block_mgr.cpp:981) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=15] do_expand or do_shrink success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:0, next_total_size:21474836480, status:1}, min_block_id:0, max_block_id:320, min_log_disk_size_for_all_tenants_:0, is_inited:true}, resize_block_cnt=320)
[2024-09-13 13:02:17.957893] INFO [CLOG] update_checksum (ob_server_log_block_mgr.cpp:1529) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=29] update_checksum success(this={magic:19536, version:1, flag:0, log_pool_meta:{curr_total_size:21474836480, next_total_size:21474836480, status:0}, checksum:3894483148})
[2024-09-13 13:02:17.958446] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143D92-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.958792] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D92-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.962097] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.962129] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=29][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.962136] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.964968] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D93-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.965498] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D93-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.966178] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D94-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.966236] INFO [CLOG] update_log_pool_meta_guarded_by_lock_ (ob_server_log_block_mgr.cpp:877) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] update_log_pool_meta_guarded_by_lock_ success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:21474836480, next_total_size:21474836480, status:0}, min_block_id:0, max_block_id:320, min_log_disk_size_for_all_tenants_:0, is_inited:true})
[2024-09-13 13:02:17.966254] INFO [CLOG] resize_ (ob_server_log_block_mgr.cpp:234) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] resize success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:21474836480, next_total_size:21474836480, status:0}, min_block_id:0, max_block_id:320, min_log_disk_size_for_all_tenants_:0, is_inited:true}, new_size_byte=21474836480, aligned_new_size_byte=21474836480, old_block_cnt=0, new_block_cnt=320, cost_ts=1598025)
[2024-09-13 13:02:17.966265] INFO [CLOG] start (ob_server_log_block_mgr.cpp:173) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] ObServerLogBlockMGR start success(ret=0, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:21474836480, next_total_size:21474836480, status:0}, min_block_id:0, max_block_id:320, min_log_disk_size_for_all_tenants_:0, is_inited:true}, new_size_byte=21474836480)
[2024-09-13 13:02:17.966274] INFO [SERVER] start (ob_server.cpp:930) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] success to start log pool
[2024-09-13 13:02:17.966358] INFO [SHARE] gen_sys_tenant_default_unit_resource (ob_unit_resource.cpp:717) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] gen_sys_tenant_default_unit_resource(ret=0, ret="OB_SUCCESS", is_hidden_sys=true, this={min_cpu:2, max_cpu:2, memory_size:"3GB", log_disk_size:"0GB", min_iops:9223372036854775807, max_iops:9223372036854775807, iops_weight:2}, lbt()="0x24edc06b 0x12b020e1 0x12af831c 0xb213c79 0xb214974 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:17.966501] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D94-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.966733] INFO [LIB] create_and_add_tenant_allocator (ob_malloc_allocator.cpp:261) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=31] tenant allocator already exists(ret=-4017, tenant_id=1)
[2024-09-13 13:02:17.966753] INFO [SERVER.OMT] update_tenant_memory (ob_multi_tenant.cpp:1242) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] reduce memory quota(mem_limit=3221225472, pre_mem_limit=9223372036854775807, target_mem_limit=3221225472, mem_hold=37769216)
[2024-09-13 13:02:17.966768] INFO [CLOG] create_tenant (ob_server_log_block_mgr.cpp:341) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] ObServerLogBlockMGR create_tenant success(this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:21474836480, next_total_size:21474836480, status:0}, min_block_id:0, max_block_id:320, min_log_disk_size_for_all_tenants_:0, is_inited:true}, log_disk_size=0)
[2024-09-13 13:02:17.972207] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.972228] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.972235] WDIAG [SERVER.OMT] get_tenant_config_with_lock (ob_tenant_config_mgr.cpp:369) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4201] failed to get fallback tenant config(fallback_tenant_id=1, ret=-4201)
[2024-09-13 13:02:17.973788] WDIAG [LIB] ~ObTimeGuard (utility.h:890) [20068][IO_SCHEDULE14][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4389] destruct(*this=time guard 'LocalDevice' cost too much time, used=6591, time_dist: LocalDevice_submit=6578)
[2024-09-13 13:02:17.973841] INFO [STORAGE.REDO] notify_flush (ob_storage_log_writer.cpp:552) [20010][OB_SLOG][T0][Y0-0000000000000000-0-0] [lt=17] Successfully flush(log_item={start_cursor:ObLogCursor{file_id=1, log_id=1, offset=0}, end_cursor:ObLogCursor{file_id=1, log_id=2, offset=266}, is_inited:true, is_local:false, buf_size:8192, buf:0x2b079e862050, len:4096, log_data_len:266, seq:1, flush_finish:false, flush_ret:0})
[2024-09-13 13:02:17.974299] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D95-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.974650] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D95-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.976844] INFO [SERVER.OMT] add_tenant_config (ob_tenant_config_mgr.cpp:309) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] tenant config added(tenant_id=1, ret=0)
[2024-09-13 13:02:17.978842] INFO [SERVER.OMT] construct_mtl_init_ctx (ob_tenant.cpp:887) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=21] construct_mtl_init_ctx success(palf_options={log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3})
[2024-09-13 13:02:17.978867] INFO [SERVER.OMT] create_tenant_module (ob_tenant.cpp:993) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=22] begin create mtl module>>>>(tenant_id=1, MTL_ID()=1)
[2024-09-13 13:02:17.978890] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:129) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] create_mtl_module(id_=1)
[2024-09-13 13:02:17.978907] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish create mtl1(type="PN9oceanbase3omt13ObSharedTimerE", mtl_ptr=0x2b07a0c31ca0)
[2024-09-13 13:02:17.978947] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObSliceAlloc init finished(bsize_=253664, isize_=63288, slice_limit_=253264, tmallocator_=NULL)
[2024-09-13 13:02:17.978963] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] finish create mtl2(type="PN9oceanbase3sql21ObTenantSQLSessionMgrE", mtl_ptr=0x2b07a0cf4030)
[2024-09-13 13:02:17.979532] INFO [COMMON] ObBaseResourcePool (ob_resource_pool.h:122) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] Construction ObResourcePool this=0x2b07a0d0d070 type=N9oceanbase8memtable10ObMemtableE allocator=0x2b07a0d0d2b0 free_list=0x2b07a0d0d0f0 ret=0 bt=0x24edc06b 0xf65a7b2 0xf5ef18a 0xf5f5e25 0x11a5eca8 0xb216e27 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75
[2024-09-13 13:02:17.979735] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D96-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.979754] INFO [COMMON] ObBaseResourcePool (ob_resource_pool.h:122) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=16] Construction ObResourcePool this=0x2b07a0d0d4b0 type=N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE allocator=0x2b07a0d0d6f0 free_list=0x2b07a0d0d530 ret=0 bt=0x24edc06b 0xf65c038 0xf5ef72f 0xf5f5e25 0x11a5eca8 0xb216e27 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75
[2024-09-13 13:02:17.979790] INFO [COMMON] ObBaseResourcePool (ob_resource_pool.h:122) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] Construction ObResourcePool this=0x2b07a0d0d8f0 type=N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE allocator=0x2b07a0d0db30 free_list=0x2b07a0d0d970 ret=0 bt=0x24edc06b 0xf65e188 0xf5efd7b 0xf5f5e25 0x11a5eca8 0xb216e27 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75
[2024-09-13 13:02:17.979836] INFO [COMMON] ObBaseResourcePool (ob_resource_pool.h:122) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] Construction ObResourcePool this=0x2b07a0d0dd30 type=N9oceanbase7storage7ObDDLKVE allocator=0x2b07a0d0df70 free_list=0x2b07a0d0ddb0 ret=0 bt=0x24edc06b 0xf660102 0xf5f032a 0xf5f5e25 0x11a5eca8 0xb216e27 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75
[2024-09-13 13:02:17.980067] INFO [COMMON] ObBaseResourcePool (ob_resource_pool.h:122) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] Construction ObResourcePool this=0x2b07a0d0e170 type=N9oceanbase7storage16ObTabletDDLKvMgrE allocator=0x2b07a0d0e3b0 free_list=0x2b07a0d0e1f0 ret=0 bt=0x24edc06b 0xf6617e8 0xf5f08c1 0xf5f5e25 0x11a5eca8 0xb216e27 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75
[2024-09-13 13:02:17.980139] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D96-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.980331] INFO [COMMON] ObBaseResourcePool (ob_resource_pool.h:122) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] Construction ObResourcePool this=0x2b07a0d0e5b0 type=N9oceanbase7storage19ObTabletMemtableMgrE allocator=0x2b07a0d0e7f0 free_list=0x2b07a0d0e630 ret=0 bt=0x24edc06b 0xf662ed2 0xf5f0e5f 0xf5f5e25 0x11a5eca8 0xb216e27 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75
[2024-09-13 13:02:17.980359] INFO [COMMON] ObBaseResourcePool (ob_resource_pool.h:122) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] Construction ObResourcePool this=0x2b07a0d0e9f0 type=N9oceanbase7storage16ObTxDataMemtableE allocator=0x2b07a0d0ec30 free_list=0x2b07a0d0ea70 ret=0 bt=0x24edc06b 0xf6645d2 0xf5f1399 0xf5f5e25 0x11a5eca8 0xb216e27 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75
[2024-09-13 13:02:17.980385] INFO [COMMON] ObBaseResourcePool (ob_resource_pool.h:122) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] Construction ObResourcePool this=0x2b07a0d0ee30 type=N9oceanbase7storage15ObTxCtxMemtableE allocator=0x2b07a0d0f070 free_list=0x2b07a0d0eeb0 ret=0 bt=0x24edc06b 0xf668092 0xf5f18cc 0xf5f5e25 0x11a5eca8 0xb216e27 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75
[2024-09-13 13:02:17.980410] INFO [COMMON] ObBaseResourcePool (ob_resource_pool.h:122) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] Construction ObResourcePool this=0x2b07a0d0f270 type=N9oceanbase11transaction9tablelock14ObLockMemtableE allocator=0x2b07a0d0f4b0 free_list=0x2b07a0d0f2f0 ret=0 bt=0x24edc06b 0xf669922 0xf5f1dfb 0xf5f5e25 0x11a5eca8 0xb216e27 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75
[2024-09-13 13:02:17.980424] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish create mtl3(type="PN9oceanbase7storage18ObTenantMetaMemMgrE", mtl_ptr=0x2b07a0cfc030)
[2024-09-13 13:02:17.982968] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl4(type="PN9oceanbase6common18ObServerObjectPoolINS_11transaction14ObPartTransCtxEEE", mtl_ptr=0x2b07a0c31d70)
[2024-09-13 13:02:17.983055] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D97-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.983383] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D97-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.983907] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=19] finish create mtl5(type="PN9oceanbase6common18ObServerObjectPoolINS_7storage19ObTableScanIteratorEEE", mtl_ptr=0x2b07a0de6230)
[2024-09-13 13:02:17.983941] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] finish create mtl6(type="PN9oceanbase6common17ObTenantIOManagerE", mtl_ptr=0x2b07a0de8030)
[2024-09-13 13:02:17.983960] INFO
[OCCAM] ObOccamTimerTaskRAIIHandle (ob_occam_timer.h:363) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] task handle constructed(*this={this:0x2b07a0de6580, task:NULL, is_inited:false, is_running:false}) [2024-09-13 13:02:17.983971] INFO [OCCAM] ObOccamTimerTaskRAIIHandle (ob_occam_timer.h:363) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] task handle constructed(*this={this:0x2b07a0de6680, task:NULL, is_inited:false, is_running:false}) [2024-09-13 13:02:17.984026] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObSliceAlloc init finished(bsize_=7936, isize_=128, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:17.984039] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish create mtl7(type="PN9oceanbase7storage3mds18ObTenantMdsServiceE", mtl_ptr=0x2b07a0de6540) [2024-09-13 13:02:17.984047] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl8(type="PN9oceanbase7storage15ObStorageLoggerE", mtl_ptr=0x2b07a0de6c40) [2024-09-13 13:02:17.984084] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObSliceAlloc init finished(bsize_=7936, isize_=40, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:17.984091] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish create mtl9(type="PN9oceanbase12blocksstable21ObSharedMacroBlockMgrE", mtl_ptr=0x2b07a0dee030) [2024-09-13 13:02:17.984186] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl10(type="PN9oceanbase5share19ObSharedMemAllocMgrE", mtl_ptr=0x2b07c337e030) [2024-09-13 13:02:17.985568] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish create 
mtl11(type="PN9oceanbase11transaction14ObTransServiceE", mtl_ptr=0x2b07c3a04030) [2024-09-13 13:02:17.985593] INFO [OCCAM] ObOccamTimerTaskRAIIHandle (ob_occam_timer.h:363) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=16] task handle constructed(*this={this:0x2b07a0dee730, task:NULL, is_inited:false, is_running:false}) [2024-09-13 13:02:17.985600] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl12(type="PN9oceanbase10logservice11coordinator19ObLeaderCoordinatorE", mtl_ptr=0x2b07a0dee6b0) [2024-09-13 13:02:17.985608] INFO [OCCAM] ObOccamTimerTaskRAIIHandle (ob_occam_timer.h:363) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] task handle constructed(*this={this:0x2b07a0deeaf0, task:NULL, is_inited:false, is_running:false}) [2024-09-13 13:02:17.985613] INFO [OCCAM] ObOccamTimerTaskRAIIHandle (ob_occam_timer.h:363) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] task handle constructed(*this={this:0x2b07a0deebf0, task:NULL, is_inited:false, is_running:false}) [2024-09-13 13:02:17.985618] INFO [COORDINATOR] ObFailureDetector (ob_failure_detector.cpp:55) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObFailureDetector constructed [2024-09-13 13:02:17.985630] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish create mtl13(type="PN9oceanbase10logservice11coordinator17ObFailureDetectorE", mtl_ptr=0x2b07a0dee9f0) [2024-09-13 13:02:17.985694] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObSliceAlloc init finished(bsize_=7936, isize_=24, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:17.985727] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] ObSliceAlloc init finished(bsize_=7936, isize_=24, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:17.986868] INFO [SHARE] create_mtl_module 
(ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl14(type="PN9oceanbase10logservice12ObLogServiceE", mtl_ptr=0x2b07c24c8030) [2024-09-13 13:02:17.986901] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=25] finish create mtl15(type="PN9oceanbase10logservice18ObGarbageCollectorE", mtl_ptr=0x2b07a0deeeb0) [2024-09-13 13:02:17.986923] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl16(type="PN9oceanbase7storage11ObLSServiceE", mtl_ptr=0x2b07a0df0030) [2024-09-13 13:02:17.986993] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish create mtl17(type="PN9oceanbase7storage29ObTenantCheckpointSlogHandlerE", mtl_ptr=0x2b07a0def2b0) [2024-09-13 13:02:17.987006] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish create mtl18(type="PN9oceanbase10compaction29ObTenantCompactionProgressMgrE", mtl_ptr=0x2b07a0defab0) [2024-09-13 13:02:17.987014] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl19(type="PN9oceanbase10compaction30ObServerCompactionEventHistoryE", mtl_ptr=0x2b07a0defd20) [2024-09-13 13:02:17.989297] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish create mtl20(type="PN9oceanbase7storage21ObTenantTabletStatMgrE", mtl_ptr=0x2b07c5004030) [2024-09-13 13:02:17.989541] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=21] ObSliceAlloc init finished(bsize_=7936, isize_=24, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:17.989635] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish create 
mtl21(type="PN9oceanbase8memtable13ObLockWaitMgrE", mtl_ptr=0x2b07c3804030) [2024-09-13 13:02:17.989648] INFO [OCCAM] ObOccamTimerTaskRAIIHandle (ob_occam_timer.h:363) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] task handle constructed(*this={this:0x2b07a0de7ac0, task:NULL, is_inited:false, is_running:false}) [2024-09-13 13:02:17.989660] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish create mtl22(type="PN9oceanbase11transaction9tablelock18ObTableLockServiceE", mtl_ptr=0x2b07a0de7a00) [2024-09-13 13:02:17.989669] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl23(type="PN9oceanbase10rootserver27ObPrimaryMajorFreezeServiceE", mtl_ptr=0x2b07a0de7d00) [2024-09-13 13:02:17.989674] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl24(type="PN9oceanbase10rootserver27ObRestoreMajorFreezeServiceE", mtl_ptr=0x2b07a0de7e20) [2024-09-13 13:02:17.989679] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl25(type="PN9oceanbase8observer19ObTenantMetaCheckerE", mtl_ptr=0x2b07a0defec0) [2024-09-13 13:02:17.989684] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish create mtl28(type="PN9oceanbase10rootserver18ObTenantInfoLoaderE", mtl_ptr=0x2b07a0df19f0) [2024-09-13 13:02:17.989697] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl29(type="PN9oceanbase10rootserver27ObCreateStandbyFromNetActorE", mtl_ptr=0x2b07a0df1cf0) [2024-09-13 13:02:17.989702] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create 
mtl30(type="PN9oceanbase10rootserver29ObStandbySchemaRefreshTriggerE", mtl_ptr=0x2b07a0df1e80) [2024-09-13 13:02:17.989706] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish create mtl31(type="PN9oceanbase10rootserver20ObLSRecoveryReportorE", mtl_ptr=0x2b07a0c31e80) [2024-09-13 13:02:17.989723] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish create mtl32(type="PN9oceanbase10rootserver17ObCommonLSServiceE", mtl_ptr=0x2b07c25e8030) [2024-09-13 13:02:17.989732] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl33(type="PN9oceanbase10rootserver18ObPrimaryLSServiceE", mtl_ptr=0x2b07c25fa030) [2024-09-13 13:02:17.989736] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl34(type="PN9oceanbase10rootserver27ObBalanceTaskExecuteServiceE", mtl_ptr=0x2b07c25fa1b0) [2024-09-13 13:02:17.989744] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl35(type="PN9oceanbase10rootserver19ObRecoveryLSServiceE", mtl_ptr=0x2b07c25fa400) [2024-09-13 13:02:17.989759] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl36(type="PN9oceanbase10rootserver16ObRestoreServiceE", mtl_ptr=0x2b07c25fa5f0) [2024-09-13 13:02:17.989765] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl37(type="PN9oceanbase10rootserver22ObTenantBalanceServiceE", mtl_ptr=0x2b07c25fac20) [2024-09-13 13:02:17.989784] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl38(type="PN9oceanbase10rootserver21ObBackupTaskSchedulerE", 
mtl_ptr=0x2b07c25fae90) [2024-09-13 13:02:17.989791] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl39(type="PN9oceanbase10rootserver19ObBackupDataServiceE", mtl_ptr=0x2b07c25fb7d0) [2024-09-13 13:02:17.989796] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish create mtl40(type="PN9oceanbase10rootserver20ObBackupCleanServiceE", mtl_ptr=0x2b07c25fb9c0) [2024-09-13 13:02:17.989805] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl41(type="PN9oceanbase10rootserver25ObArchiveSchedulerServiceE", mtl_ptr=0x2b07c25fbd70) [2024-09-13 13:02:17.989821] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish create mtl42(type="PN9oceanbase7storage27ObTenantSSTableMergeInfoMgrE", mtl_ptr=0x2b07c25fc030) [2024-09-13 13:02:17.989828] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl43(type="PN9oceanbase5share26ObDagWarningHistoryManagerE", mtl_ptr=0x2b07c25fc7b0) [2024-09-13 13:02:17.989834] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish create mtl44(type="PN9oceanbase10compaction24ObScheduleSuspectInfoMgrE", mtl_ptr=0x2b07c25fccb0) [2024-09-13 13:02:17.989844] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl45(type="PN9oceanbase7storage12ObLobManagerE", mtl_ptr=0x2b07b75956c0) [2024-09-13 13:02:17.989963] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish create mtl46(type="PN9oceanbase5share22ObGlobalAutoIncServiceE", mtl_ptr=0x2b07c33a2030) [2024-09-13 13:02:17.990028] INFO [LIB] ObSliceAlloc 
(ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObSliceAlloc init finished(bsize_=7936, isize_=576, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:17.990133] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl47(type="PN9oceanbase5share8detector21ObDeadLockDetectorMgrE", mtl_ptr=0x2b07c25fe030) [2024-09-13 13:02:17.991464] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D98-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.991934] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D98-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:17.992285] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl48(type="PN9oceanbase11transaction11ObXAServiceE", mtl_ptr=0x2b07c5404030) [2024-09-13 13:02:17.992341] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=49] finish create mtl49(type="PN9oceanbase11transaction18ObTimestampServiceE", mtl_ptr=0x2b07c25fd0b0) [2024-09-13 13:02:17.992371] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=28] finish create mtl50(type="PN9oceanbase11transaction25ObStandbyTimestampServiceE", mtl_ptr=0x2b07c25fd860) [2024-09-13 13:02:17.992377] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl51(type="PN9oceanbase11transaction17ObTimestampAccessE", mtl_ptr=0x2b07c25fdf00) [2024-09-13 13:02:17.992387] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl52(type="PN9oceanbase11transaction16ObTransIDServiceE", mtl_ptr=0x2b07a0df2030) 
[2024-09-13 13:02:17.992399] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish create mtl53(type="PN9oceanbase11transaction17ObUniqueIDServiceE", mtl_ptr=0x2b07a0df2270)
[2024-09-13 13:02:17.992420] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl55(type="PN9oceanbase3sql9ObPsCacheE", mtl_ptr=0x2b07a0df2340)
[2024-09-13 13:02:17.992454] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=24] finish create mtl56(type="PN9oceanbase3sql11ObPlanCacheE", mtl_ptr=0x2b07a0df2780)
[2024-09-13 13:02:17.992467] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish create mtl57(type="PN9oceanbase6common15ObDetectManagerE", mtl_ptr=0x2b07a0df2d30)
[2024-09-13 13:02:17.992485] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl58(type="PN9oceanbase3sql3dtl11ObTenantDfcE", mtl_ptr=0x2b07a0df4030)
[2024-09-13 13:02:17.992497] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl59(type="PN9oceanbase3omt9ObPxPoolsE", mtl_ptr=0x2b07a0df4f70)
[2024-09-13 13:02:17.993817] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl61(type="PN9oceanbase7obmysql21ObMySQLRequestManagerE", mtl_ptr=0x2b07c386e030)
[2024-09-13 13:02:17.993840] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=17] finish create mtl62(type="PN9oceanbase11transaction23ObTenantWeakReadServiceE", mtl_ptr=0x2b07a0df5120)
[2024-09-13 13:02:17.993853] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl63(type="PN9oceanbase3sql24ObTenantSqlMemoryManagerE", mtl_ptr=NULL)
[2024-09-13 13:02:17.993860] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl64(type="PN9oceanbase3sql3dtl24ObDTLIntermResultManagerE", mtl_ptr=0x2b07a0df5960)
[2024-09-13 13:02:17.993886] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl65(type="PN9oceanbase3sql21ObPlanMonitorNodeListE", mtl_ptr=0x2b07a0df6030)
[2024-09-13 13:02:17.994077] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl66(type="PN9oceanbase3sql19ObDataAccessServiceE", mtl_ptr=0x2b07a0df31c0)
[2024-09-13 13:02:17.994092] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] finish create mtl67(type="PN9oceanbase3sql14ObDASIDServiceE", mtl_ptr=0x2b07a0df3d40)
[2024-09-13 13:02:17.994101] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish create mtl68(type="PN9oceanbase5share6schema21ObTenantSchemaServiceE", mtl_ptr=0x2b07a0de7f40)
[2024-09-13 13:02:17.994115] INFO [OCCAM] ObOccamTimerTaskRAIIHandle (ob_occam_timer.h:363) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] task handle constructed(*this={this:0x2b07a0dfabb0, task:NULL, is_inited:false, is_running:false})
[2024-09-13 13:02:17.994148] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish create mtl69(type="PN9oceanbase7storage15ObTenantFreezerE", mtl_ptr=0x2b07a0dfa030)
[2024-09-13 13:02:17.994159] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl70(type="PN9oceanbase7storage10checkpoint19ObCheckPointServiceE", mtl_ptr=0x2b07c33d4030)
[2024-09-13 13:02:17.994164] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish create mtl71(type="PN9oceanbase7storage10checkpoint17ObTabletGCServiceE", mtl_ptr=0x2b07c33d45f0)
[2024-09-13 13:02:17.994283] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D99-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.994563] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl72(type="PN9oceanbase7archive16ObArchiveServiceE", mtl_ptr=0x2b07c33d6030)
[2024-09-13 13:02:17.994609] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] finish create mtl73(type="PN9oceanbase7storage23ObTenantTabletSchedulerE", mtl_ptr=0x2b07c33e4030)
[2024-09-13 13:02:17.994632] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish create mtl74(type="PN9oceanbase5share20ObTenantDagSchedulerE", mtl_ptr=0x2b07c33d4900)
[2024-09-13 13:02:17.994643] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish create mtl75(type="PN9oceanbase7storage18ObStorageHAServiceE", mtl_ptr=0x2b07c25ffe30)
[2024-09-13 13:02:17.994661] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish create mtl76(type="PN9oceanbase7storage21ObTenantFreezeInfoMgrE", mtl_ptr=0x2b07c33ee030)
[2024-09-13 13:02:17.994670] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl77(type="PN9oceanbase11transaction14ObTxLoopWorkerE", mtl_ptr=0x2b07c33eeaf0)
[2024-09-13 13:02:17.994676] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish create mtl78(type="PN9oceanbase7storage15ObAccessServiceE", mtl_ptr=0x2b07c33eec10)
[2024-09-13 13:02:17.994687] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish create mtl79(type="PN9oceanbase7storage17ObTransferServiceE", mtl_ptr=0x2b07c33eecf0)
[2024-09-13 13:02:17.994700] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish create mtl80(type="PN9oceanbase10rootserver23ObTenantTransferServiceE", mtl_ptr=0x2b07c33eeef0)
[2024-09-13 13:02:17.994711] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish create mtl81(type="PN9oceanbase7storage16ObRebuildServiceE", mtl_ptr=0x2b07c33ef080)
[2024-09-13 13:02:17.994727] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl82(type="PN9oceanbase8datadict17ObDataDictServiceE", mtl_ptr=0x2b07c33ef2f0)
[2024-09-13 13:02:17.994811] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D99-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:17.994871] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl83(type="PN9oceanbase8observer18ObTableLoadServiceE", mtl_ptr=0x2b07c33f0030)
[2024-09-13 13:02:17.994889] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=17] finish create mtl84(type="PN9oceanbase8observer26ObTableLoadResourceServiceE", mtl_ptr=0x2b07c33f0e70)
[2024-09-13 13:02:17.994896] INFO [OCCAM] ObOccamTimerTaskRAIIHandle (ob_occam_timer.h:363) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] task handle constructed(*this={this:0x2b07c33f0fc0, task:NULL, is_inited:false, is_running:false})
[2024-09-13 13:02:17.994904] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish create mtl85(type="PN9oceanbase19concurrency_control30ObMultiVersionGarbageCollectorE", mtl_ptr=0x2b07c33f0f80)
[2024-09-13 13:02:17.994911] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish create mtl86(type="PN9oceanbase3sql8ObUDRMgrE", mtl_ptr=0x2b07c33f11c0)
[2024-09-13 13:02:17.994925] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl87(type="PN9oceanbase3sql12ObFLTSpanMgrE", mtl_ptr=0x2b07c33f2030)
[2024-09-13 13:02:17.994936] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish create mtl89(type="PN9oceanbase10rootserver18ObHeartbeatServiceE", mtl_ptr=0x2b07c33f14c0)
[2024-09-13 13:02:17.994945] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish create mtl90(type="PN9oceanbase6common23ObOptStatMonitorManagerE", mtl_ptr=0x2b07c33f18b0)
[2024-09-13 13:02:17.994956] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish create mtl91(type="PN9oceanbase3omt11ObTenantSrsE", mtl_ptr=0x2b07c33d5780)
[2024-09-13 13:02:17.995005] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObSliceAlloc init finished(bsize_=7936, isize_=24, slice_limit_=7536, tmallocator_=NULL)
[2024-09-13 13:02:17.995034] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish create mtl92(type="PN9oceanbase5table15ObHTableLockMgrE", mtl_ptr=0x2b07c33f6030)
[2024-09-13 13:02:17.995045] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish create mtl93(type="PN9oceanbase5table12ObTTLServiceE", mtl_ptr=0x2b07c33d5e00)
[2024-09-13 13:02:17.995054] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish create mtl94(type="PN9oceanbase5table21ObTableApiSessPoolMgrE", mtl_ptr=0x2b07a0df5da0)
[2024-09-13 13:02:17.995486] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish create mtl95(type="PN9oceanbase7storage10checkpoint23ObCheckpointDiagnoseMgrE", mtl_ptr=0x2b07c5804030)
[2024-09-13 13:02:17.995497] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish create mtl96(type="PN9oceanbase7storage18ObStorageHADiagMgrE", mtl_ptr=0x2b07c33f1ba0)
[2024-09-13 13:02:17.995504] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish create mtl97(type="PN9oceanbase5share19ObIndexUsageInfoMgrE", mtl_ptr=0x2b07c33ef560)
[2024-09-13 13:02:17.995509] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish create mtl98(type="PN9oceanbase5share25ObResourceLimitCalculatorE", mtl_ptr=0x2b07c25fbf40)
[2024-09-13 13:02:17.995523] INFO [SHARE] create_mtl_module (ob_tenant_base.cpp:141) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish create mtl99(type="PN9oceanbase5table21ObTableGroupCommitMgrE", mtl_ptr=0x2b07c3992030)
[2024-09-13 13:02:17.995534] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:153) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] init_mtl_module(id_=1)
[2024-09-13 13:02:17.995564] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=298, tg=0x2b07b7591da0, thread_cnt=1, tg->attr_={name:TntSharedTimer, type:3}, tg=0x2b07b7591da0)
[2024-09-13 13:02:17.995582] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] finish init mtl1(cost_time_us=37, type="PN9oceanbase3omt13ObSharedTimerE")
[2024-09-13 13:02:17.995588] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish init mtl2(cost_time_us=0, type="PN9oceanbase3sql21ObTenantSQLSessionMgrE")
[2024-09-13 13:02:17.995603] INFO [STORAGE] cal_adaptive_bucket_num (ob_tenant_meta_mem_mgr.cpp:1795) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] cal adaptive bucket num(mem_limit=3221225472, min_bkt_cnt=10243, max_bkt_cnt=1000000, tablet_bucket_num=150000, bucket_num=196613)
[2024-09-13 13:02:17.996780] INFO [STORAGE] init (ob_resource_map.h:237) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] init resource map success(ret=0, attr=tenant_id=1, label=TabletMap, ctx_id=0, prio=0, bkt_num=196613)
[2024-09-13 13:02:17.996802] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] create tg succeed(tg_id=299, tg=0x2b07bf1dd2b0, thread_cnt=1, tg->attr_={name:TenantMetaMemMgr, type:3}, tg=0x2b07bf1dd2b0)
[2024-09-13 13:02:17.996810] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl3(cost_time_us=1218, type="PN9oceanbase7storage18ObTenantMetaMemMgrE")
[2024-09-13 13:02:17.996818] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl4(cost_time_us=0, type="PN9oceanbase6common18ObServerObjectPoolINS_11transaction14ObPartTransCtxEEE")
[2024-09-13 13:02:17.996821] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish init mtl5(cost_time_us=0, type="PN9oceanbase6common18ObServerObjectPoolINS_7storage19ObTableScanIteratorEEE")
[2024-09-13 13:02:17.996851] INFO [COMMON] init_macro_pool (ob_io_struct.cpp:352) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] succ to init io macro pool(memory_limit=536870912, block_count=2)
[2024-09-13 13:02:17.996946] INFO [COMMON] init (ob_io_mclock.cpp:226) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] init other group clock success(i=0, unit_config={min_iops:10000, max_iops:50000, weight:10000}, cur_config={deleted:false, cleared:false, min_percent:100, max_percent:100, weight_percent:100})
[2024-09-13 13:02:17.997034] INFO [COMMON] mtl_init (ob_io_manager.cpp:574) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=18] mtl init tenant io manager success(tenant_id=1, io_service={is_inited_:true, ref_cnt_:1, tenant_id_:1, io_config_:{group_num_:0, memory_limit_:536870912, callback_thread_count_:8, unit_config_:{min_iops:10000, max_iops:50000, weight:10000}, enable_io_tracer:false, group_configs:[other_groups:{deleted:false, cleared:false, min_percent:100, max_percent:100, weight_percent:100}]}, io_clock_:{is_inited_:true, group_clocks:[], other_clock:{is_inited_:true, is_stopped_:false, reservation_clock:{iops:10000, last_ns:0}, is_unlimited:false, limitation_clock:{iops:50000, last_ns:0}, proportion_clock:{iops:10000, last_ns:0}}, unit_clock:{iops:50000, last_ns:0}, io_config_:{group_num_:0, memory_limit_:536870912, callback_thread_count_:8, unit_config_:{min_iops:10000, max_iops:50000, weight:10000}, enable_io_tracer:false, group_configs:[other_groups:{deleted:false, cleared:false, min_percent:100, max_percent:100, weight_percent:100}]}, io_usage_:{doing_request_count:[0:0]}}, io_allocator_:{is_inited_:true, allocated:4268192}, io_scheduler_:{is_inited_:true, io_config_:{write_failure_detect_interval_:60000000, read_failure_black_list_interval_:60000000, data_storage_warning_tolerance_time_:5000000, data_storage_error_tolerance_time_:300000000, disk_io_thread_count_:8, data_storage_io_timeout_ms_:120000}, senders_:[{is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:256, sender_index_:1}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:257, sender_index_:2}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:258, sender_index_:3}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:259, sender_index_:4}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:260, sender_index_:5}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:261, sender_index_:6}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:262, sender_index_:7}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:263, sender_index_:8}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:264, sender_index_:9}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:265, sender_index_:10}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:266, sender_index_:11}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:267, sender_index_:12}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, 
r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:268, sender_index_:13}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:269, sender_index_:14}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:270, sender_index_:15}, {is_inited_:true, stop_submit_:false, io_queue_:{is_inited_:true, r_heap_.count():2, gl_heap_.count():2, tl_heap_.count():0, ready_heap_.count():0}, tg_id_:271, sender_index_:16}]}, callback_mgr_:{is_inited_:false, config_thread_count_:0, queue_depth_:0, runners_:[], io_allocator_:NULL}}) [2024-09-13 13:02:17.997137] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=102] finish init mtl6(cost_time_us=311, type="PN9oceanbase6common17ObTenantIOManagerE") [2024-09-13 13:02:17.997580] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20107][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=979252543488) [2024-09-13 13:02:17.997710] INFO register_pm (ob_page_manager.cpp:40) [20107][][T0][Y0-0000000000000000-0-0] [lt=28] register pm finish(ret=0, &pm=0x2b07c6256340, pm.get_tid()=20107, tenant_id=500) [2024-09-13 13:02:17.997927] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20107][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=1) [2024-09-13 13:02:17.997926] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] init thread success(this=0x2b07baf6c030, id=4, ret=0) [2024-09-13 13:02:17.997941] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20107][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] thread is running function [2024-09-13 13:02:17.997975] INFO [OCCAM] init (ob_occam_thread_pool.h:248) 
[19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] init occam thread pool success(ret=0, thread_num=1, queue_size_square_of_2=10, lbt()="0x24edc06b 0x8359ee6 0x8358cc3 0x8215155 0x10c3209c 0x11a7044a 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75") [2024-09-13 13:02:17.998355] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] TimeWheelBase inited success(precision=100000, start_ticket=17262037379, scan_ticket=17262037379) [2024-09-13 13:02:17.998366] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] ObTimeWheel init success(precision=100000, real_thread_num=1) [2024-09-13 13:02:17.998548] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20108][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=983547510784) [2024-09-13 13:02:17.998674] INFO register_pm (ob_page_manager.cpp:40) [20108][][T0][Y0-0000000000000000-0-0] [lt=23] register pm finish(ret=0, &pm=0x2b07c62d4340, pm.get_tid()=20108, tenant_id=500) [2024-09-13 13:02:17.998702] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObTimeWheel start success(timer_name="MdsT") [2024-09-13 13:02:17.998703] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20108][][T1][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=2) [2024-09-13 13:02:17.998711] INFO [OCCAM] init_and_start (ob_occam_timer.h:570) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] init ObOccamTimer success(ret=0) [2024-09-13 13:02:17.998723] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl7(cost_time_us=1573, type="PN9oceanbase7storage3mds18ObTenantMdsServiceE") [2024-09-13 13:02:17.998846] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] create tg 
succeed(tg_id=300, tg=0x2b07bf1f7ec0, thread_cnt=1, tg->attr_={name:StorageLogWriter, type:2}, tg=0x2b07bf1f7ec0) [2024-09-13 13:02:17.998861] INFO [STORAGE.REDO] init (ob_storage_log_writer.cpp:103) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] Successfully init slog writer(ret=0, log_dir=0x2b07a0de7400, log_file_size=67108864, max_log_size=8192, log_file_spec={retry_write_policy:"normal", log_create_policy:"normal", log_write_policy:"truncate"}) [2024-09-13 13:02:17.998873] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish init mtl8(cost_time_us=142, type="PN9oceanbase7storage15ObStorageLoggerE") [2024-09-13 13:02:17.998944] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=19] create tg succeed(tg_id=301, tg=0x2b07bf1df2b0, thread_cnt=1, tg->attr_={name:SSTableDefragment, type:3}, tg=0x2b07bf1df2b0) [2024-09-13 13:02:17.998956] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish init mtl9(cost_time_us=63, type="PN9oceanbase12blocksstable21ObSharedMacroBlockMgrE") [2024-09-13 13:02:17.998979] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] ObSliceAlloc init finished(bsize_=7936, isize_=128, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:17.999008] INFO [SHARE] update_decay_factor_ (ob_throttle_unit.ipp:266) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] [Throttle] Update Throttle Unit Config(is_adaptive_update=false, N=2.457599997520446777e+02, this=0x2b07c338c170, enable_adaptive_limit_=false, Unit Name=Memstore, Config Specify Resource Limit(MB)=0, Resource Limit(MB)=1228, Throttle Trigger(MB)=737, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=7.831060060963772607e+00) [2024-09-13 13:02:17.999035] INFO [SHARE] init (ob_throttle_unit.ipp:61) [19877][observer][T1][Y0-0000000000000001-0-0] 
[lt=26] [Throttle]Init throttle config finish(tenant_id_=1, unit_name_=Memstore, resource_limit_=1288490188, config_specify_resource_limit_=1288490188, throttle_trigger_percentage_=60, throttle_max_duration_=7200000000) [2024-09-13 13:02:17.999050] INFO [SHARE] init_one_ (ob_share_resource_throttle_tool.ipp:85) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] init one allocator for throttle finish(ret=0, ret="OB_SUCCESS", mtt={allocator_:0x2b07c339a2b0, module_throttle_unit_:{unit_name_:Memstore, is_inited_:true, enable_adaptive_limit_:false, config_specify_resource_limit_:1288490188, resource_limit_:1288490188, sequence_num_:0, clock_:0, pre_clock_:0, throttle_trigger_percentage_:60, throttle_max_duration_:7200000000, last_advance_clock_ts_us_:0, last_print_throttle_info_ts_:0, last_update_limit_ts_:0, decay_factor_:"7.831060060963772607e+00"}}) [2024-09-13 13:02:17.999076] INFO [SHARE] update_decay_factor_ (ob_throttle_unit.ipp:266) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=17] [Throttle] Update Throttle Unit Config(is_adaptive_update=false, N=3.247203087197580680e+04, this=0x2b07c33850d8, enable_adaptive_limit_=false, Unit Name=TxData, Config Specify Resource Limit(MB)=0, Resource Limit(MB)=614, Throttle Trigger(MB)=368, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=2.590156323699679671e-08) [2024-09-13 13:02:17.999091] INFO [SHARE] init (ob_throttle_unit.ipp:61) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] [Throttle]Init throttle config finish(tenant_id_=1, unit_name_=TxData, resource_limit_=644245094, config_specify_resource_limit_=644245094, throttle_trigger_percentage_=60, throttle_max_duration_=7200000000) [2024-09-13 13:02:17.999098] INFO [SHARE] init_one_ (ob_share_resource_throttle_tool.ipp:85) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] init one allocator for throttle finish(ret=0, ret="OB_SUCCESS", mtt={allocator_:0x2b07c339b470, module_throttle_unit_:{unit_name_:TxData, is_inited_:true, 
enable_adaptive_limit_:false, config_specify_resource_limit_:644245094, resource_limit_:644245094, sequence_num_:0, clock_:0, pre_clock_:0, throttle_trigger_percentage_:60, throttle_max_duration_:7200000000, last_advance_clock_ts_us_:0, last_print_throttle_info_ts_:0, last_update_limit_ts_:0, decay_factor_:"2.590156323699679671e-08"}}) [2024-09-13 13:02:17.999121] INFO [SHARE] update_decay_factor_ (ob_throttle_unit.ipp:266) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=18] [Throttle] Update Throttle Unit Config(is_adaptive_update=false, N=1.623601537298387120e+04, this=0x2b07c337e040, enable_adaptive_limit_=false, Unit Name=Mds, Config Specify Resource Limit(MB)=0, Resource Limit(MB)=307, Throttle Trigger(MB)=184, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=4.144004296208466418e-07) [2024-09-13 13:02:17.999138] INFO [SHARE] init (ob_throttle_unit.ipp:61) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=16] [Throttle]Init throttle config finish(tenant_id_=1, unit_name_=Mds, resource_limit_=322122547, config_specify_resource_limit_=322122547, throttle_trigger_percentage_=60, throttle_max_duration_=7200000000) [2024-09-13 13:02:17.999144] INFO [SHARE] init_one_ (ob_share_resource_throttle_tool.ipp:85) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] init one allocator for throttle finish(ret=0, ret="OB_SUCCESS", mtt={allocator_:0x2b07c33a1670, module_throttle_unit_:{unit_name_:Mds, is_inited_:true, enable_adaptive_limit_:false, config_specify_resource_limit_:322122547, resource_limit_:322122547, sequence_num_:0, clock_:0, pre_clock_:0, throttle_trigger_percentage_:60, throttle_max_duration_:7200000000, last_advance_clock_ts_us_:0, last_print_throttle_info_ts_:0, last_update_limit_ts_:0, decay_factor_:"4.144004296208466418e-07"}}) [2024-09-13 13:02:17.999169] INFO [SHARE] update_decay_factor_ (ob_throttle_unit.ipp:266) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] [Throttle] Update Throttle Unit 
Config(is_adaptive_update=false, N=3.071999998092651367e+02, this=0x2b07c3393208, enable_adaptive_limit_=false, Unit Name=TxShare, Config Specify Resource Limit(MB)=0, Resource Limit(MB)=1536, Throttle Trigger(MB)=921, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=3.212808046489710634e+00) [2024-09-13 13:02:17.999183] INFO [SHARE] init (ob_throttle_unit.ipp:61) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] [Throttle]Init throttle config finish(tenant_id_=1, unit_name_=TxShare, resource_limit_=1610612736, config_specify_resource_limit_=1610612736, throttle_trigger_percentage_=60, throttle_max_duration_=7200000000) [2024-09-13 13:02:17.999193] INFO [SHARE] init (ob_share_resource_throttle_tool.ipp:65) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] init share resource throttle tool finish(ret=0, ret="OB_SUCCESS", this=0x2b07c337e038) [2024-09-13 13:02:17.999202] INFO [SHARE] init (ob_shared_memory_allocator_mgr.h:53) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl share mem allocator mgr(tenant_id_=1, this=0x2b07c337e030) [2024-09-13 13:02:17.999213] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish init mtl10(cost_time_us=245, type="PN9oceanbase5share19ObSharedMemAllocMgrE") [2024-09-13 13:02:17.999237] INFO [STORAGE.TRANS] init (ob_trans_rpc.cpp:220) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] transaction rpc inited success [2024-09-13 13:02:17.999251] INFO [STORAGE.TRANS] init (ob_location_adapter.cpp:46) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] ob location cache adapter inited success [2024-09-13 13:02:17.999263] INFO [STORAGE.TRANS] alloc (ob_trans_factory.cpp:267) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] trans factory statistics(object_name="ObGtiRpcProxy", label="ObModIds::OB_GTI_RPC_PROXY", alloc_count=0, release_count=0, used=0) [2024-09-13 13:02:17.999288] INFO [STORAGE.TRANS] alloc 
(ob_trans_factory.cpp:268) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=19] trans factory statistics(object_name="ObGtiRequestRpc", label="ObModIds::OB_GTI_REQUEST_RPC", alloc_count=0, release_count=0, used=0) [2024-09-13 13:02:17.999321] INFO [STORAGE.TRANS] init (ob_gti_rpc.cpp:61) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=28] gti request rpc inited success(this=0x2b07b9dff0f0, self="172.16.51.35:2882") [2024-09-13 13:02:17.999334] INFO [STORAGE.TRANS] init (ob_gti_source.cpp:49) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] gti source init success(server="172.16.51.35:2882", this=0x2b07c3a056b8) [2024-09-13 13:02:17.999823] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] TimeWheelBase inited success(precision=100000, start_ticket=17262037379, scan_ticket=17262037379) [2024-09-13 13:02:18.000218] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] TimeWheelBase inited success(precision=100000, start_ticket=17262037380, scan_ticket=17262037380) [2024-09-13 13:02:18.000229] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObTimeWheel init success(precision=100000, real_thread_num=2) [2024-09-13 13:02:18.000239] INFO [STORAGE.TRANS] init (ob_trans_timer.cpp:188) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] transaction timer inited success [2024-09-13 13:02:18.000674] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] TimeWheelBase inited success(precision=3000000, start_ticket=575401246, scan_ticket=575401246) [2024-09-13 13:02:18.000686] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] ObTimeWheel init success(precision=3000000, real_thread_num=1) [2024-09-13 13:02:18.000693] INFO [STORAGE.TRANS] init (ob_trans_timer.cpp:336) 
[19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] dup table lease timer inited success [2024-09-13 13:02:18.001769] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20109][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=987842478080) [2024-09-13 13:02:18.001869] INFO register_pm (ob_page_manager.cpp:40) [20109][][T0][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07c6352340, pm.get_tid()=20109, tenant_id=500) [2024-09-13 13:02:18.001901] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20109][][T1][Y0-0000000000000000-0-0] [lt=23] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=3) [2024-09-13 13:02:18.002065] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20110][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=992137445376) [2024-09-13 13:02:18.002149] INFO register_pm (ob_page_manager.cpp:40) [20110][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07c63d0340, pm.get_tid()=20110, tenant_id=500) [2024-09-13 13:02:18.002170] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20110][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=4) [2024-09-13 13:02:18.002171] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] simple thread pool init success(name=TransService, thread_num=2, task_num_limit=150000) [2024-09-13 13:02:18.002204] INFO [STORAGE.TRANS] init (ob_trans_define_v4.cpp:1518) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=19] txDescMgr.init(ret=0, inited_=true, stoped_=true, active_cnt=0) [2024-09-13 13:02:18.002285] INFO [STORAGE.TRANS] init (ob_trans_ctx_mgr_v4.cpp:1719) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=18] ObTxCtxMgr inited success(*this={is_inited_:true, tenant_id_:1, this:0x2b07c3a041b0}, txs=0x2b07c3a04030) [2024-09-13 13:02:18.002330] INFO [STORAGE.DUP_TABLE] init 
(ob_dup_table_util.cpp:1734) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=21] init ObDupTableLoopWorker [2024-09-13 13:02:18.002409] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D9A-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.002572] INFO [STORAGE.TRANS] init (ob_tablet_to_ls_cache.cpp:33) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObTabletToLSCache init success(ret=0, ret="OB_SUCCESS", tenant_id=1, this={is_inited:true, tx_ctx_mgr:0x2b07c3a041b0, this:0x2b07c3a0d1f0}) [2024-09-13 13:02:18.002591] INFO [STORAGE.TRANS] init (ob_trans_service.cpp:178) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=18] transaction service inited success(this={is_inited_:true, tenant_id_:1, this:0x2b07c3a04030}, tenant_memory_limit=3221225472, tablet_to_ls_cache={is_inited:true, tx_ctx_mgr:0x2b07c3a041b0, this:0x2b07c3a0d1f0}) [2024-09-13 13:02:18.002611] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=19] finish init mtl11(cost_time_us=3388, type="PN9oceanbase11transaction14ObTransServiceE") [2024-09-13 13:02:18.002633] INFO [OCCAM] get_idx (ob_occam_time_guard.h:224) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] init point thread id with(&point=0x55a3873c7dc0, idx_=3493, point=[thread id=19877, timeout ts=08:00:00.0, last click point="(null):(null):0", last click ts=08:00:00.0], thread_id=19877) [2024-09-13 13:02:18.002912] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D9A-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.003103] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20111][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=996432412672) [2024-09-13 13:02:18.003190] INFO register_pm (ob_page_manager.cpp:40) [20111][][T0][Y0-0000000000000000-0-0] [lt=11] 
register pm finish(ret=0, &pm=0x2b07c6c56340, pm.get_tid()=20111, tenant_id=500) [2024-09-13 13:02:18.003211] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20111][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=5) [2024-09-13 13:02:18.003218] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4] thread is running function [2024-09-13 13:02:18.003212] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=25] init thread success(this=0x2b07baf6c190, id=5, ret=0) [2024-09-13 13:02:18.003262] INFO [OCCAM] init (ob_occam_thread_pool.h:248) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] init occam thread pool success(ret=0, thread_num=1, queue_size_square_of_2=10, lbt()="0x24edc06b 0x8359ee6 0x8358cc3 0x8215155 0xa8729f8 0x11a706ed 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75") [2024-09-13 13:02:18.003748] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] TimeWheelBase inited success(precision=100000, start_ticket=17262037380, scan_ticket=17262037380) [2024-09-13 13:02:18.003767] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=18] ObTimeWheel init success(precision=100000, real_thread_num=1) [2024-09-13 13:02:18.003948] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20112][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1000727379968) [2024-09-13 13:02:18.004034] INFO register_pm (ob_page_manager.cpp:40) [20112][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07c6cd4340, pm.get_tid()=20112, tenant_id=500) [2024-09-13 13:02:18.004060] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20112][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=6) [2024-09-13 13:02:18.004060] INFO 
[STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] ObTimeWheel start success(timer_name="CoordTR") [2024-09-13 13:02:18.004071] INFO [OCCAM] init_and_start (ob_occam_timer.h:570) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] init ObOccamTimer success(ret=0) [2024-09-13 13:02:18.004480] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20113][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1005022347264) [2024-09-13 13:02:18.004560] INFO register_pm (ob_page_manager.cpp:40) [20113][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07c6d52340, pm.get_tid()=20113, tenant_id=500) [2024-09-13 13:02:18.004583] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20113][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=7) [2024-09-13 13:02:18.004588] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20113][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=5] thread is running function [2024-09-13 13:02:18.004584] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] init thread success(this=0x2b07baf6c2f0, id=6, ret=0) [2024-09-13 13:02:18.004613] INFO [OCCAM] init (ob_occam_thread_pool.h:248) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] init occam thread pool success(ret=0, thread_num=1, queue_size_square_of_2=10, lbt()="0x24edc06b 0x8359ee6 0x8358cc3 0x8215155 0xa872a37 0x11a706ed 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75") [2024-09-13 13:02:18.005066] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] TimeWheelBase inited success(precision=100000, start_ticket=17262037380, scan_ticket=17262037380) [2024-09-13 13:02:18.005082] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] ObTimeWheel init 
success(precision=100000, real_thread_num=1) [2024-09-13 13:02:18.005258] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20114][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1009317314560) [2024-09-13 13:02:18.005337] INFO register_pm (ob_page_manager.cpp:40) [20114][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07c6dd0340, pm.get_tid()=20114, tenant_id=500) [2024-09-13 13:02:18.005354] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20114][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=8) [2024-09-13 13:02:18.005355] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObTimeWheel start success(timer_name="CoordTF") [2024-09-13 13:02:18.005365] INFO [OCCAM] init_and_start (ob_occam_timer.h:570) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] init ObOccamTimer success(ret=0) [2024-09-13 13:02:18.005378] INFO [COORDINATOR] mtl_init (ob_leader_coordinator.cpp:87) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObLeaderCoordinator mtl init success(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.005391] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish init mtl12(cost_time_us=2771, type="PN9oceanbase10logservice11coordinator19ObLeaderCoordinatorE") [2024-09-13 13:02:18.005402] INFO [COORDINATOR] mtl_init (ob_failure_detector.cpp:71) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] ObFailureDetector mtl init [2024-09-13 13:02:18.005417] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl13(cost_time_us=9, type="PN9oceanbase10logservice11coordinator17ObFailureDetectorE") [2024-09-13 13:02:18.005477] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] ObSliceAlloc init finished(bsize_=7936, 
isize_=24, slice_limit_=7536, tmallocator_=0x2b07c09b6030) [2024-09-13 13:02:18.005518] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] ObSliceAlloc init finished(bsize_=65408, isize_=288, slice_limit_=65008, tmallocator_=0x2b07c09b6030) [2024-09-13 13:02:18.005548] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObSliceAlloc init finished(bsize_=7936, isize_=64, slice_limit_=7536, tmallocator_=0x2b07c09b6030) [2024-09-13 13:02:18.005575] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObSliceAlloc init finished(bsize_=65408, isize_=160, slice_limit_=65008, tmallocator_=0x2b07c09b6030) [2024-09-13 13:02:18.005601] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObSliceAlloc init finished(bsize_=7936, isize_=64, slice_limit_=7536, tmallocator_=0x2b07c09b6030) [2024-09-13 13:02:18.005628] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObSliceAlloc init finished(bsize_=7936, isize_=96, slice_limit_=7536, tmallocator_=0x2b07c09b6030) [2024-09-13 13:02:18.005655] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObSliceAlloc init finished(bsize_=7936, isize_=64, slice_limit_=7536, tmallocator_=0x2b07c09b6030) [2024-09-13 13:02:18.005683] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObSliceAlloc init finished(bsize_=7936, isize_=56, slice_limit_=7536, tmallocator_=0x2b07c09b6030) [2024-09-13 13:02:18.005696] INFO construct_allocator_ (ob_tenant_mutil_allocator_mgr.cpp:169) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObTenantMutilAllocator init success(tenant_id=1) [2024-09-13 13:02:18.005776] INFO [CLOG] check_and_prepare_dir (ob_log_service.cpp:231) 
[19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] check_and_prepare_dir success(ret=0, dir="/data1/oceanbase/data/clog/tenant_1") [2024-09-13 13:02:18.005952] INFO [PALF] init_log_io_worker_config_ (palf_env_impl.cpp:1388) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] init_log_io_worker_config_ success(config={io_worker_num:1, io_queue_capcity:102400, batch_width:8, batch_depth:2048}, tenant_id=1, log_writer_parallelism=3) [2024-09-13 13:02:18.005979] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=19] create tg succeed(tg_id=302, tg=0x2b07c09a7c80, thread_cnt=1, tg->attr_={name:FetchLog, type:4}, tg=0x2b07c09a7c80) [2024-09-13 13:02:18.006007] INFO [PALF] init (log_rpc.cpp:56) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] LogRpc init success(tenant_id=1, self="172.16.51.35:2882") [2024-09-13 13:02:18.006025] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] create tg succeed(tg_id=303, tg=0x2b07c09a90f0, thread_cnt=1, tg->attr_={name:LogIOCb, type:4}, tg=0x2b07c09a90f0) [2024-09-13 13:02:18.006039] INFO [PALF] init (log_io_task_cb_thread_pool.cpp:53) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] LogIOTaskCbThreadPool init success(ret=0, tg_id_=303, palf_env_impl_=0x2b07c59f8030, palf_env_impl=0x2b07c59f8030, log_io_cb_num=262144) [2024-09-13 13:02:18.006734] INFO [PALF] init (log_io_worker.cpp:379) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] BatchLogIOFlushLogTask init success(ret=0, i=0, io_task=0x2b07c59fe930) [2024-09-13 13:02:18.006778] INFO [PALF] init (log_io_worker.cpp:379) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=20] BatchLogIOFlushLogTask init success(ret=0, i=1, io_task=0x2b07c59fead0) [2024-09-13 13:02:18.006814] INFO [PALF] init (log_io_worker.cpp:379) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] BatchLogIOFlushLogTask init success(ret=0, i=2, io_task=0x2b07c59fec70) [2024-09-13 13:02:18.006848] INFO 
[PALF] init (log_io_worker.cpp:379) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] BatchLogIOFlushLogTask init success(ret=0, i=3, io_task=0x2b07c59fee10) [2024-09-13 13:02:18.006893] INFO [PALF] init (log_io_worker.cpp:379) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] BatchLogIOFlushLogTask init success(ret=0, i=4, io_task=0x2b07c59fefb0) [2024-09-13 13:02:18.006927] INFO [PALF] init (log_io_worker.cpp:379) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] BatchLogIOFlushLogTask init success(ret=0, i=5, io_task=0x2b07c59ff150) [2024-09-13 13:02:18.006963] INFO [PALF] init (log_io_worker.cpp:379) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] BatchLogIOFlushLogTask init success(ret=0, i=6, io_task=0x2b07c59ff2f0) [2024-09-13 13:02:18.006994] INFO [PALF] init (log_io_worker.cpp:379) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] BatchLogIOFlushLogTask init success(ret=0, i=7, io_task=0x2b07c59ff490) [2024-09-13 13:02:18.007006] INFO [PALF] init (log_io_worker.cpp:99) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] LogIOWorker init success(ret=0, config={io_worker_num:1, io_queue_capcity:102400, batch_width:8, batch_depth:2048}, cb_thread_pool_tg_id=303, palf_env_impl={IPalfEnvImpl:{IPalfEnvImpl:"Dummy"}, self:"0.0.0.0:0", log_dir:"", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):-1, log_disk_utilization_limit_threshold(%):-1, log_disk_throttling_percentage(%):-1, log_disk_throttling_maximum_duration(s):0, log_writer_parallelism:-1}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):-1, log_disk_utilization_limit_threshold(%):-1, log_disk_throttling_percentage(%):-1, log_disk_throttling_maximum_duration(s):0, log_writer_parallelism:-1}, status:0, cur_unrecyclable_log_disk_size(MB):0, sequence:-1}, log_alloc_mgr_:NULL}) [2024-09-13 13:02:18.007043] INFO [PALF] create_and_init_log_io_workers_ (log_io_worker_wrapper.cpp:168) 
[19877][observer][T1][Y0-0000000000000001-0-0] [lt=34] init LogIOWorker success(i=0, config={io_worker_num:1, io_queue_capcity:102400, batch_width:8, batch_depth:2048}, tenant_id=1, cb_thread_pool_tg_id=303, allocator=0x2b07c09b6030, palf_env_impl=0x2b07c59f8030, iow=0x2b07c59fe030, log_io_workers_=0x2b07c59fe030) [2024-09-13 13:02:18.007060] INFO [PALF] init (log_io_worker_wrapper.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] success to init LogIOWorkerWrapper(config={io_worker_num:1, io_queue_capcity:102400, batch_width:8, batch_depth:2048}, tenant_id=1, this={is_inited:true, is_user_tenant:false, log_writer_parallelism:1, log_io_workers_:0x2b07c59fe030, round_robin_idx:0}) [2024-09-13 13:02:18.007082] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=16] create tg succeed(tg_id=304, tg=0x2b07c09a9370, thread_cnt=1, tg->attr_={name:LogSharedQueueThread, type:4}, tg=0x2b07c09a9370) [2024-09-13 13:02:18.007094] INFO [PALF] init (log_shared_queue_thread.cpp:50) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] LogSharedQueueTh init success(ret=0, tg_id_=304, palf_env_impl=0x2b07c59f8030) [2024-09-13 13:02:18.007107] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] create tg succeed(tg_id=305, tg=0x2b07c09a95f0, thread_cnt=1, tg->attr_={name:PalfGC, type:3}, tg=0x2b07c09a95f0) [2024-09-13 13:02:18.007121] INFO [PALF] init (block_gc_timer_task.cpp:37) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] BlockGCTimerTask init success(palf_env_impl={IPalfEnvImpl:{IPalfEnvImpl:"Dummy"}, self:"0.0.0.0:0", log_dir:"", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):-1, log_disk_utilization_limit_threshold(%):-1, log_disk_throttling_percentage(%):-1, log_disk_throttling_maximum_duration(s):0, log_writer_parallelism:-1}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, 
log_disk_utilization_threshold(%):-1, log_disk_utilization_limit_threshold(%):-1, log_disk_throttling_percentage(%):-1, log_disk_throttling_maximum_duration(s):0, log_writer_parallelism:-1}, status:0, cur_unrecyclable_log_disk_size(MB):0, sequence:-1}, log_alloc_mgr_:NULL}, tg_id_=305, tg_id=92) [2024-09-13 13:02:18.007145] INFO [PALF] init (log_loop_thread.cpp:54) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=19] LogLoopThread init finished(ret=0) [2024-09-13 13:02:18.007582] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20115][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1013612281856) [2024-09-13 13:02:18.007684] INFO register_pm (ob_page_manager.cpp:40) [20115][][T0][Y0-0000000000000000-0-0] [lt=23] register pm finish(ret=0, &pm=0x2b07c6e56340, pm.get_tid()=20115, tenant_id=500) [2024-09-13 13:02:18.007710] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20115][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=9) [2024-09-13 13:02:18.007709] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] init thread success(this=0x2b07baf6c450, id=7, ret=0) [2024-09-13 13:02:18.007722] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20115][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] thread is running function [2024-09-13 13:02:18.007767] INFO [OCCAM] init (ob_occam_thread_pool.h:248) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] init occam thread pool success(ret=0, thread_num=1, queue_size_square_of_2=10, lbt()="0x24edc06b 0x8359ee6 0x8358cc3 0x8215155 0x820d17b 0x83deac8 0x11a707fb 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75") [2024-09-13 13:02:18.008253] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] TimeWheelBase inited success(precision=10000, start_ticket=172620373800, scan_ticket=172620373800) [2024-09-13 
13:02:18.008276] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=22] ObTimeWheel init success(precision=10000, real_thread_num=1) [2024-09-13 13:02:18.008483] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20116][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1017907249152) [2024-09-13 13:02:18.008531] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D9B-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.008577] INFO register_pm (ob_page_manager.cpp:40) [20116][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07c6ed4340, pm.get_tid()=20116, tenant_id=500) [2024-09-13 13:02:18.008601] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20116][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=10) [2024-09-13 13:02:18.008601] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObTimeWheel start success(timer_name="ElectTimer") [2024-09-13 13:02:18.008611] INFO [OCCAM] init_and_start (ob_occam_timer.h:570) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] init ObOccamTimer success(ret=0) [2024-09-13 13:02:18.008663] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] create tg succeed(tg_id=306, tg=0x2b07c09ad440, thread_cnt=1, tg->attr_={name:LogUpdater, type:3}, tg=0x2b07c09ad440) [2024-09-13 13:02:18.008680] INFO [PALF] init (log_updater.cpp:41) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] LogUpdater init success(palf_env_impl={IPalfEnvImpl:{IPalfEnvImpl:"Dummy"}, self:"0.0.0.0:0", log_dir:"/data1/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, 
log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:0}, log_alloc_mgr_:NULL}, tg_id_=306, tg_id=111) [2024-09-13 13:02:18.008706] INFO [PALF] init (palf_env_impl.cpp:280) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=25] PalfEnvImpl init success(ret=0, self_="172.16.51.35:2882", this={IPalfEnvImpl:{IPalfEnvImpl:"Dummy"}, self:"172.16.51.35:2882", log_dir:"/data1/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:0}, log_alloc_mgr_:{flying_log_task:0, flying_meta_task:0}}) [2024-09-13 13:02:18.008751] INFO [PALF] scan_all_palf_handle_impl_director_ (palf_env_impl.cpp:533) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=22] scan_all_palf_handle_impl_director_ success(ret=0, log_dir_="/data1/oceanbase/data/clog/tenant_1", guard=time guard 'PalfEnvImplStart' cost too much time, used=28, time_dist: scan_dir=21) [2024-09-13 13:02:18.008770] WDIAG [LIB] ~ObTimeGuard (utility.h:890) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=18][errcode=-4389] destruct(*this=time guard 'PalfEnvImplStart' cost too much time, used=47, time_dist: scan_dir=21) [2024-09-13 
13:02:18.009105] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D9B-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.009815] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D9C-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.010177] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D9C-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.010447] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20117][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1022202216448) [2024-09-13 13:02:18.010535] INFO register_pm (ob_page_manager.cpp:40) [20117][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07c6f52340, pm.get_tid()=20117, tenant_id=500) [2024-09-13 13:02:18.010559] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20117][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=11) [2024-09-13 13:02:18.010560] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] simple thread pool init success(name=LogIOCb, thread_num=1, task_num_limit=262144) [2024-09-13 13:02:18.010577] INFO start (log_io_task_cb_thread_pool.cpp:67) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=16] start tg(tg_id_=303, tg_name=LogIOCb) [2024-09-13 13:02:18.010590] INFO [PALF] start (log_io_task_cb_thread_pool.cpp:71) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] start LogIOTaskCbThreadPool success(ret=0, tg_id_=303) [2024-09-13 13:02:18.010759] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20118][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1026497183744) [2024-09-13 13:02:18.010828] INFO register_pm 
(ob_page_manager.cpp:40) [20118][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07c6fd0340, pm.get_tid()=20118, tenant_id=500) [2024-09-13 13:02:18.010845] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20118][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=12) [2024-09-13 13:02:18.010845] INFO [PALF] start (log_io_worker_wrapper.cpp:94) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] success to start LogIOWorkerWrapper(this={is_inited:true, is_user_tenant:false, log_writer_parallelism:1, log_io_workers_:0x2b07c59fe030, round_robin_idx:0}) [2024-09-13 13:02:18.011086] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20119][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1030792151040) [2024-09-13 13:02:18.011144] INFO register_pm (ob_page_manager.cpp:40) [20119][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07c7856340, pm.get_tid()=20119, tenant_id=500) [2024-09-13 13:02:18.011159] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20119][][T1][Y0-0000000000000000-0-0] [lt=10] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=13) [2024-09-13 13:02:18.011160] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] simple thread pool init success(name=LogSharedQueueThread, thread_num=1, task_num_limit=900) [2024-09-13 13:02:18.011172] INFO start (log_shared_queue_thread.cpp:64) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=304, tg_name=LogSharedQueueThread) [2024-09-13 13:02:18.011184] INFO [PALF] start (log_shared_queue_thread.cpp:68) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] start LogSharedQueueTh success(ret=0, tg_id_=304) [2024-09-13 13:02:18.011191] INFO start (block_gc_timer_task.cpp:48) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=305, tg_name=PalfGC) [2024-09-13 13:02:18.011371] INFO [SHARE] 
get_next_sess_id (ob_active_session_guard.cpp:336) [20120][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1035087118336) [2024-09-13 13:02:18.011475] INFO register_pm (ob_page_manager.cpp:40) [20120][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07c78d4340, pm.get_tid()=20120, tenant_id=500) [2024-09-13 13:02:18.011496] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20120][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=14) [2024-09-13 13:02:18.011526] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObTimer create success(this=0x2b07c09a9610, thread_id=20120, lbt()=0x24edc06b 0x13836960 0x115a4182 0x8080439 0x820c772 0x83deac8 0x11a707fb 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.011542] INFO [PALF] start (block_gc_timer_task.cpp:53) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] BlockGCTimerTask start success(tg_id_=305, palf_env_impl_={IPalfEnvImpl:{IPalfEnvImpl:"Dummy"}, self:"172.16.51.35:2882", log_dir:"/data1/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:0}, log_alloc_mgr_:{flying_log_task:0, flying_meta_task:0}}) [2024-09-13 13:02:18.011691] INFO run1 (ob_timer.cpp:361) [20120][][T1][Y0-0000000000000000-0-0] [lt=5] timer thread started(this=0x2b07c09a9610, tid=20120, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 
0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.012515] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20121][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1039382085632) [2024-09-13 13:02:18.012655] INFO register_pm (ob_page_manager.cpp:40) [20121][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07c7952340, pm.get_tid()=20121, tenant_id=500) [2024-09-13 13:02:18.012678] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20121][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=15) [2024-09-13 13:02:18.012678] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=30] simple thread pool init success(name=FetchLog, thread_num=1, task_num_limit=65536) [2024-09-13 13:02:18.012698] INFO start (fetch_log_engine.cpp:148) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] start tg(tg_id_=302, tg_name=FetchLog) [2024-09-13 13:02:18.012709] INFO [PALF] start (fetch_log_engine.cpp:151) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] start FetchLogEngine success(ret=0, tg_id_=302) [2024-09-13 13:02:18.013030] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20122][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1043677052928) [2024-09-13 13:02:18.013130] INFO register_pm (ob_page_manager.cpp:40) [20122][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07c79d0340, pm.get_tid()=20122, tenant_id=500) [2024-09-13 13:02:18.013153] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20122][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=16) [2024-09-13 13:02:18.013153] INFO start (log_updater.cpp:51) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=306, tg_name=LogUpdater) [2024-09-13 13:02:18.013494] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) 
[20123][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1047972020224) [2024-09-13 13:02:18.013600] INFO register_pm (ob_page_manager.cpp:40) [20123][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07c7a56340, pm.get_tid()=20123, tenant_id=500) [2024-09-13 13:02:18.013630] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20123][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=17) [2024-09-13 13:02:18.013669] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObTimer create success(this=0x2b07c09ad460, thread_id=20123, lbt()=0x24edc06b 0x13836960 0x115a4182 0x820c86f 0x83deac8 0x11a707fb 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.013686] INFO [PALF] start (log_updater.cpp:56) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] LogUpdater start success(tg_id_=306, palf_env_impl_={IPalfEnvImpl:{IPalfEnvImpl:"Dummy"}, self:"172.16.51.35:2882", log_dir:"/data1/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:0}, log_alloc_mgr_:{flying_log_task:0, flying_meta_task:0}}) [2024-09-13 13:02:18.013714] INFO [PALF] start (palf_env_impl.cpp:311) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=25] PalfEnv start success(ret=0) [2024-09-13 13:02:18.013725] INFO [PALF] create_palf_env (palf_env.cpp:65) [19877][observer][T1][Y0-0000000000000001-0-0] 
[lt=8] create_palf_handle_impl success(base_dir="/data1/oceanbase/data/clog/tenant_1") [2024-09-13 13:02:18.013747] INFO [CLOG] init (ob_ls_adapter.cpp:46) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=18] ObLSAdapter init success(ret=0, ls_service_=0x2b07a0df0030) [2024-09-13 13:02:18.013779] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=25] create tg succeed(tg_id=307, tg=0x2b07c09ad630, thread_cnt=1, tg->attr_={name:ApplySrv, type:4}, tg=0x2b07c09ad630) [2024-09-13 13:02:18.013837] INFO [CLOG] init (ob_log_apply_service.cpp:1071) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] ObLogApplyService init success(is_inited_=true) [2024-09-13 13:02:18.013853] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] create tg succeed(tg_id=308, tg=0x2b07c09ad8b0, thread_cnt=1, tg->attr_={name:ReplaySrv, type:4}, tg=0x2b07c09ad8b0) [2024-09-13 13:02:18.013919] INFO run1 (ob_timer.cpp:361) [20123][][T1][Y0-0000000000000000-0-0] [lt=7] timer thread started(this=0x2b07c09ad460, tid=20123, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.013937] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] create tg succeed(tg_id=309, tg=0x2b07bf1dfde0, thread_cnt=1, tg->attr_={name:ReplayProcessStat, type:3}, tg=0x2b07bf1dfde0) [2024-09-13 13:02:18.013951] INFO [CLOG] init (ob_log_replay_service.cpp:68) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] ReplayProcessStat init success(rp_sv_=0x2b07c24c8470, tg_id_=309, tg_id=101) [2024-09-13 13:02:18.013966] INFO [CLOG] init (ob_log_replay_service.cpp:220) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] replay service init success(tg_id_=308) [2024-09-13 13:02:18.013976] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=310, tg=0x2b07c09adb30, thread_cnt=1, 
tg->attr_={name:RCSrv, type:4}, tg=0x2b07c09adb30) [2024-09-13 13:02:18.013987] INFO [CLOG] init (ob_role_change_service.cpp:156) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] ObRoleChangeService init success(ret=0, tg_id_=310, ls_service=0x2b07a0df0030, apply_service=0x2b07c24c8070, replay_service=0x2b07c24c8480) [2024-09-13 13:02:18.013999] INFO [PALF] init (ob_location_adapter.cpp:46) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObLocationAdapter init success(ret=0, location_service_=0x55a386aebcc0) [2024-09-13 13:02:18.014012] INFO [PALF] init (ob_reporter_adapter.cpp:44) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObLogReporterAdapter init success(ret=0, rs_reporter_=0x55a386e0e8c0) [2024-09-13 13:02:18.014026] INFO [ARCHIVE] init (large_buffer_pool.cpp:53) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] LargeBufferPool init succ(this={inited:true, total_limit:1073741824, label:"CDCService", array:[]}) [2024-09-13 13:02:18.014357] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20124][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1052266987520) [2024-09-13 13:02:18.014480] INFO register_pm (ob_page_manager.cpp:40) [20124][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07c7ad4340, pm.get_tid()=20124, tenant_id=500) [2024-09-13 13:02:18.014511] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20124][][T1][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=18) [2024-09-13 13:02:18.014511] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=21] simple thread pool init success(name=ObLogEXTTP, thread_num=1, task_num_limit=64) [2024-09-13 13:02:18.014533] INFO [CLOG] init (ob_log_external_storage_handler.cpp:65) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=19] ObLogExternalStorageHandler inits successfully(this={concurrency:1, capacity:64, is_running:false, 
is_inited:true, handle_adapter_:0x2b07c59abe70, this:0x2b07c24cb670}) [2024-09-13 13:02:18.014553] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=16] create tg succeed(tg_id=311, tg=0x2b07c09addb0, thread_cnt=1, tg->attr_={name:CDCSrv, type:2}, tg=0x2b07c09addb0) [2024-09-13 13:02:18.014607] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=29] ObSliceAlloc init finished(bsize_=16777216, isize_=4992, slice_limit_=16776816, tmallocator_=NULL) [2024-09-13 13:02:18.014631] INFO [CLOG] init (ob_log_restore_net_driver.cpp:92) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] ObLogRestoreNetDriver init succ [2024-09-13 13:02:18.015050] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20125][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1056561954816) [2024-09-13 13:02:18.015163] INFO register_pm (ob_page_manager.cpp:40) [20125][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07c7b52340, pm.get_tid()=20125, tenant_id=500) [2024-09-13 13:02:18.015188] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20125][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=19) [2024-09-13 13:02:18.015188] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] simple thread pool init success(name=ObLogEXTTP, thread_num=1, task_num_limit=64) [2024-09-13 13:02:18.015199] INFO [CLOG] init (ob_log_external_storage_handler.cpp:65) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObLogExternalStorageHandler inits successfully(this={concurrency:1, capacity:64, is_running:false, is_inited:true, handle_adapter_:0x2b07c67fda70, this:0x2b07c25e5cb0}) [2024-09-13 13:02:18.015213] INFO [ARCHIVE] init (large_buffer_pool.cpp:53) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] LargeBufferPool init succ(this={inited:true, 
total_limit:1073741824, label:"IterBuf", array:[]}) [2024-09-13 13:02:18.015226] INFO [CLOG] init (ob_log_restore_scheduler.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObLogRestoreScheduler init succ(tenant_id_=1) [2024-09-13 13:02:18.015241] INFO [CLOG] init (ob_log_restore_service.cpp:91) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObLogRestoreService init succ [2024-09-13 13:02:18.015255] INFO [PALF] init (ob_locality_adapter.cpp:43) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObLocalityAdapter init success(locality_manager_=0x55a386e11900) [2024-09-13 13:02:18.015268] INFO [CLOG] init (ob_log_service.cpp:304) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObLogService init success(ret=0, base_dir="/data1/oceanbase/data/clog/tenant_1", self="172.16.51.35:2882", transport=0x2b07a0919ed0, batch_rpc=0x55a386aba800, ls_service=0x2b07a0df0030, tenant_id=1) [2024-09-13 13:02:18.015461] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D9D-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.016349] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D9D-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.018000] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D9E-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.018385] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D9E-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.021870] INFO [CLOG] mtl_init (ob_log_service.cpp:118) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] ObLogService mtl_init success [2024-09-13 13:02:18.021907] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=35] 
finish init mtl14(cost_time_us=16475, type="PN9oceanbase10logservice12ObLogServiceE") [2024-09-13 13:02:18.022050] INFO [CLOG] init (ob_garbage_collector.cpp:1294) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=27] ObGarbageCollector is inited(ret=0, self_addr_="172.16.51.35:2882") [2024-09-13 13:02:18.022064] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] finish init mtl15(cost_time_us=130, type="PN9oceanbase10logservice18ObGarbageCollectorE") [2024-09-13 13:02:18.022457] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl16(cost_time_us=385, type="PN9oceanbase7storage11ObLSServiceE") [2024-09-13 13:02:18.022486] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=20] create tg succeed(tg_id=312, tg=0x2b07c09af0f0, thread_cnt=1, tg->attr_={name:WriteCkpt, type:3}, tg=0x2b07c09af0f0) [2024-09-13 13:02:18.022511] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] finish init mtl17(cost_time_us=34, type="PN9oceanbase7storage29ObTenantCheckpointSlogHandlerE") [2024-09-13 13:02:18.022537] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish init mtl18(cost_time_us=17, type="PN9oceanbase10compaction29ObTenantCompactionProgressMgrE") [2024-09-13 13:02:18.022666] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish init mtl19(cost_time_us=113, type="PN9oceanbase10compaction30ObServerCompactionEventHistoryE") [2024-09-13 13:02:18.023427] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=20] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14289635738, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, 
tenant_ids_=[500, 508]) [2024-09-13 13:02:18.026471] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143D9F-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.026965] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143D9F-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.030730] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] create tg succeed(tg_id=313, tg=0x2b07c09af2e0, thread_cnt=1, tg->attr_={name:TabletStatRpt, type:3}, tg=0x2b07c09af2e0) [2024-09-13 13:02:18.030753] INFO init (ob_tenant_tablet_stat_mgr.cpp:573) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=22] start tg(report_tg_id_=313, tg_name=TabletStatRpt) [2024-09-13 13:02:18.031057] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20126][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1060856922112) [2024-09-13 13:02:18.031202] INFO register_pm (ob_page_manager.cpp:40) [20126][][T0][Y0-0000000000000000-0-0] [lt=25] register pm finish(ret=0, &pm=0x2b07c7bd0340, pm.get_tid()=20126, tenant_id=500) [2024-09-13 13:02:18.031243] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20126][][T1][Y0-0000000000000000-0-0] [lt=19] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=20) [2024-09-13 13:02:18.031270] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] ObTimer create success(this=0x2b07c09af300, thread_id=20126, lbt()=0x24edc06b 0x13836960 0x115a4182 0xfc997db 0x11a70b34 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.031288] INFO [STORAGE] mtl_init (ob_tenant_tablet_stat_mgr.cpp:592) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] success to init ObTenantTabletStatMgr(MTL_ID()=1) [2024-09-13 13:02:18.031304] INFO [SHARE] init_mtl_module 
(ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] finish init mtl20(cost_time_us=8622, type="PN9oceanbase7storage21ObTenantTabletStatMgrE") [2024-09-13 13:02:18.031322] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] ObSliceAlloc init finished(bsize_=7936, isize_=24, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:18.031382] INFO [STORAGE.TRANS] init (ob_lock_wait_mgr.cpp:125) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] LockWaitMgr.init(ret=0) [2024-09-13 13:02:18.031394] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish init mtl21(cost_time_us=81, type="PN9oceanbase8memtable13ObLockWaitMgrE") [2024-09-13 13:02:18.031405] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl22(cost_time_us=7, type="PN9oceanbase11transaction9tablelock18ObTableLockServiceE") [2024-09-13 13:02:18.031419] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl23(cost_time_us=7, type="PN9oceanbase10rootserver27ObPrimaryMajorFreezeServiceE") [2024-09-13 13:02:18.031426] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl24(cost_time_us=1, type="PN9oceanbase10rootserver27ObRestoreMajorFreezeServiceE") [2024-09-13 13:02:18.031443] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=314, tg=0x2b07c09af4d0, thread_cnt=1, tg->attr_={name:LSMetaCh, type:3}, tg=0x2b07c09af4d0) [2024-09-13 13:02:18.031453] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] create tg succeed(tg_id=315, tg=0x2b07c09af6c0, thread_cnt=1, tg->attr_={name:TbMetaCh, type:3}, tg=0x2b07c09af6c0) [2024-09-13 13:02:18.031461] INFO [SHARE] 
init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish init mtl25(cost_time_us=32, type="PN9oceanbase8observer19ObTenantMetaCheckerE")
[2024-09-13 13:02:18.031471] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl26(cost_time_us=0, type="PN9oceanbase8observer11QueueThreadE")
[2024-09-13 13:02:18.031474] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish init mtl27(cost_time_us=0, type="PN9oceanbase7storage25ObStorageHAHandlerServiceE")
[2024-09-13 13:02:18.031485] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish init mtl28(cost_time_us=8, type="PN9oceanbase10rootserver18ObTenantInfoLoaderE")
[2024-09-13 13:02:18.031502] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] create tg succeed(tg_id=316, tg=0x2b07c09af8b0, thread_cnt=1, tg->attr_={name:ObCreateStandbyFromNetActor, type:1}, tg=0x2b07c09af8b0)
[2024-09-13 13:02:18.031512] INFO start (ob_tenant_thread_helper.cpp:76) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=316, tg_name=ObCreateStandbyFromNetActor)
[2024-09-13 13:02:18.031638] INFO run1 (ob_timer.cpp:361) [20126][][T1][Y0-0000000000000000-0-0] [lt=12] timer thread started(this=0x2b07c09af300, tid=20126, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:18.031783] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20127][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1065151889408)
[2024-09-13 13:02:18.031939] INFO register_pm (ob_page_manager.cpp:40) [20127][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c8a56340, pm.get_tid()=20127, tenant_id=500)
[2024-09-13 13:02:18.031978] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20127][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=21)
[2024-09-13 13:02:18.031992] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish init mtl29(cost_time_us=496, type="PN9oceanbase10rootserver27ObCreateStandbyFromNetActorE")
[2024-09-13 13:02:18.031991] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20127][][T1][Y0-0000000000000000-0-0] [lt=11] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.032000] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=317, tg=0x2b07c09afa70, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}, tg=0x2b07c09afa70)
[2024-09-13 13:02:18.032010] INFO start (ob_tenant_thread_helper.cpp:76) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=317, tg_name=SimpleLSService)
[2024-09-13 13:02:18.032241] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20128][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1069446856704)
[2024-09-13 13:02:18.032373] INFO register_pm (ob_page_manager.cpp:40) [20128][][T0][Y0-0000000000000000-0-0] [lt=23] register pm finish(ret=0, &pm=0x2b07c8ad4340, pm.get_tid()=20128, tenant_id=500)
[2024-09-13 13:02:18.032411] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20128][][T1][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=22)
[2024-09-13 13:02:18.032419] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish init mtl30(cost_time_us=423, type="PN9oceanbase10rootserver29ObStandbySchemaRefreshTriggerE")
[2024-09-13 13:02:18.032419] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20128][][T1][Y0-0000000000000000-0-0] [lt=7] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.032426] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl31(cost_time_us=3, type="PN9oceanbase10rootserver20ObLSRecoveryReportorE")
[2024-09-13 13:02:18.032443] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=318, tg=0x2b07c09afc30, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}, tg=0x2b07c09afc30)
[2024-09-13 13:02:18.032448] INFO start (ob_tenant_thread_helper.cpp:76) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] start tg(tg_id_=318, tg_name=SimpleLSService)
[2024-09-13 13:02:18.032676] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20129][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1073741824000)
[2024-09-13 13:02:18.032787] INFO register_pm (ob_page_manager.cpp:40) [20129][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07c8b52340, pm.get_tid()=20129, tenant_id=500)
[2024-09-13 13:02:18.032823] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20129][][T1][Y0-0000000000000000-0-0] [lt=25] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=23)
[2024-09-13 13:02:18.032829] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20129][][T1][Y0-0000000000000000-0-0] [lt=5] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.032833] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl32(cost_time_us=398, type="PN9oceanbase10rootserver17ObCommonLSServiceE")
[2024-09-13 13:02:18.032842] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=319, tg=0x2b07c09afdf0, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}, tg=0x2b07c09afdf0)
[2024-09-13 13:02:18.032849] INFO start (ob_tenant_thread_helper.cpp:76) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=319, tg_name=SimpleLSService)
[2024-09-13 13:02:18.033121] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20130][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1078036791296)
[2024-09-13 13:02:18.033244] INFO register_pm (ob_page_manager.cpp:40) [20130][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07c8bd0340, pm.get_tid()=20130, tenant_id=500)
[2024-09-13 13:02:18.033299] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20130][][T1][Y0-0000000000000000-0-0] [lt=21] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=24)
[2024-09-13 13:02:18.033307] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish init mtl33(cost_time_us=468, type="PN9oceanbase10rootserver18ObPrimaryLSServiceE")
[2024-09-13 13:02:18.033311] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20130][][T1][Y0-0000000000000000-0-0] [lt=11] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.033322] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=320, tg=0x2b07c09ed0f0, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}, tg=0x2b07c09ed0f0)
[2024-09-13 13:02:18.033330] INFO start (ob_tenant_thread_helper.cpp:76) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=320, tg_name=SimpleLSService)
[2024-09-13 13:02:18.033599] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20131][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1082331758592)
[2024-09-13 13:02:18.033714] INFO register_pm (ob_page_manager.cpp:40) [20131][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07c8c56340, pm.get_tid()=20131, tenant_id=500)
[2024-09-13 13:02:18.033742] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20131][][T1][Y0-0000000000000000-0-0] [lt=20] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=25)
[2024-09-13 13:02:18.033752] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl34(cost_time_us=441, type="PN9oceanbase10rootserver27ObBalanceTaskExecuteServiceE")
[2024-09-13 13:02:18.033752] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20131][][T1][Y0-0000000000000000-0-0] [lt=9] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.033767] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=321, tg=0x2b07c09ed2b0, thread_cnt=2, tg->attr_={name:LSService, type:1}, tg=0x2b07c09ed2b0)
[2024-09-13 13:02:18.033781] INFO start (ob_tenant_thread_helper.cpp:76) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] start tg(tg_id_=321, tg_name=LSService)
[2024-09-13 13:02:18.034015] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20132][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1086626725888)
[2024-09-13 13:02:18.034131] INFO register_pm (ob_page_manager.cpp:40) [20132][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07c8cd4340, pm.get_tid()=20132, tenant_id=500)
[2024-09-13 13:02:18.034152] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20132][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=26)
[2024-09-13 13:02:18.034158] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20132][][T1][Y0-0000000000000000-0-0] [lt=6] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.034374] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20133][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1090921693184)
[2024-09-13 13:02:18.034475] INFO register_pm (ob_page_manager.cpp:40) [20133][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07c8d52340, pm.get_tid()=20133, tenant_id=500)
[2024-09-13 13:02:18.034499] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20133][][T1][Y0-0000000000000000-0-0] [lt=19] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=27)
[2024-09-13 13:02:18.034503] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20133][][T1][Y0-0000000000000000-0-0] [lt=4] new reentrant thread created(idx=1)
[2024-09-13 13:02:18.034509] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl35(cost_time_us=751, type="PN9oceanbase10rootserver19ObRecoveryLSServiceE")
[2024-09-13 13:02:18.034515] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=322, tg=0x2b07c09ed470, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}, tg=0x2b07c09ed470)
[2024-09-13 13:02:18.034520] INFO start (ob_tenant_thread_helper.cpp:76) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] start tg(tg_id_=322, tg_name=SimpleLSService)
[2024-09-13 13:02:18.034756] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20134][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1095216660480)
[2024-09-13 13:02:18.034865] INFO register_pm (ob_page_manager.cpp:40) [20134][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07c8dd0340, pm.get_tid()=20134, tenant_id=500)
[2024-09-13 13:02:18.034899] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20134][][T1][Y0-0000000000000000-0-0] [lt=23] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=28)
[2024-09-13 13:02:18.034905] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20134][][T1][Y0-0000000000000000-0-0] [lt=6] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.034910] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl36(cost_time_us=398, type="PN9oceanbase10rootserver16ObRestoreServiceE")
[2024-09-13 13:02:18.034923] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=323, tg=0x2b07c09ed630, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}, tg=0x2b07c09ed630)
[2024-09-13 13:02:18.034931] INFO start (ob_tenant_thread_helper.cpp:76) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=323, tg_name=SimpleLSService)
[2024-09-13 13:02:18.035135] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DA0-0-0] [lt=24][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.035154] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20135][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1099511627776)
[2024-09-13 13:02:18.035240] INFO register_pm (ob_page_manager.cpp:40) [20135][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07c8e56340, pm.get_tid()=20135, tenant_id=500)
[2024-09-13 13:02:18.035262] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20135][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=29)
[2024-09-13 13:02:18.035268] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish init mtl37(cost_time_us=353, type="PN9oceanbase10rootserver22ObTenantBalanceServiceE")
[2024-09-13 13:02:18.035269] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20135][][T1][Y0-0000000000000000-0-0] [lt=7] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.035279] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=324, tg=0x2b07c09ed7f0, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}, tg=0x2b07c09ed7f0)
[2024-09-13 13:02:18.035287] INFO create (ob_backup_base_service.cpp:59) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=324, tg_name=SimpleLSService)
[2024-09-13 13:02:18.035521] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20136][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1103806595072)
[2024-09-13 13:02:18.035546] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DA0-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.035638] INFO register_pm (ob_page_manager.cpp:40) [20136][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07c8ed4340, pm.get_tid()=20136, tenant_id=500)
[2024-09-13 13:02:18.035667] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20136][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=30)
[2024-09-13 13:02:18.035675] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20136][][T1][Y0-0000000000000000-0-0] [lt=7] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.035758] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish init mtl38(cost_time_us=486, type="PN9oceanbase10rootserver21ObBackupTaskSchedulerE")
[2024-09-13 13:02:18.035794] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=325, tg=0x2b07c09ed9b0, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}, tg=0x2b07c09ed9b0)
[2024-09-13 13:02:18.035803] INFO create (ob_backup_base_service.cpp:59) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=325, tg_name=SimpleLSService)
[2024-09-13 13:02:18.036032] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20137][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1108101562368)
[2024-09-13 13:02:18.036155] INFO register_pm (ob_page_manager.cpp:40) [20137][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07c8f52340, pm.get_tid()=20137, tenant_id=500)
[2024-09-13 13:02:18.036197] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20137][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=31)
[2024-09-13 13:02:18.036205] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20137][][T1][Y0-0000000000000000-0-0] [lt=7] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.036227] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish init mtl39(cost_time_us=457, type="PN9oceanbase10rootserver19ObBackupDataServiceE")
[2024-09-13 13:02:18.036252] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=326, tg=0x2b07c09edb70, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}, tg=0x2b07c09edb70)
[2024-09-13 13:02:18.036257] INFO create (ob_backup_base_service.cpp:59) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] start tg(tg_id_=326, tg_name=SimpleLSService)
[2024-09-13 13:02:18.036485] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20138][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1112396529664)
[2024-09-13 13:02:18.036577] INFO register_pm (ob_page_manager.cpp:40) [20138][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07c8fd0340, pm.get_tid()=20138, tenant_id=500)
[2024-09-13 13:02:18.036604] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20138][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=32)
[2024-09-13 13:02:18.036608] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20138][][T1][Y0-0000000000000000-0-0] [lt=4] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.036623] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish init mtl40(cost_time_us=389, type="PN9oceanbase10rootserver20ObBackupCleanServiceE")
[2024-09-13 13:02:18.036631] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=327, tg=0x2b07c09edd30, thread_cnt=1, tg->attr_={name:SimpleLSService, type:1}, tg=0x2b07c09edd30)
[2024-09-13 13:02:18.036639] INFO create (ob_backup_base_service.cpp:59) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=327, tg_name=SimpleLSService)
[2024-09-13 13:02:18.036871] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20139][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1116691496960)
[2024-09-13 13:02:18.036975] INFO register_pm (ob_page_manager.cpp:40) [20139][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07c9056340, pm.get_tid()=20139, tenant_id=500)
[2024-09-13 13:02:18.036998] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20139][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=33)
[2024-09-13 13:02:18.037006] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20139][][T1][Y0-0000000000000000-0-0] [lt=7] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.037017] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish init mtl41(cost_time_us=390, type="PN9oceanbase10rootserver25ObArchiveSchedulerServiceE")
[2024-09-13 13:02:18.037032] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl42(cost_time_us=8, type="PN9oceanbase7storage27ObTenantSSTableMergeInfoMgrE")
[2024-09-13 13:02:18.037090] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl43(cost_time_us=51, type="PN9oceanbase5share26ObDagWarningHistoryManagerE")
[2024-09-13 13:02:18.037149] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl44(cost_time_us=53, type="PN9oceanbase10compaction24ObScheduleSuspectInfoMgrE")
[2024-09-13 13:02:18.037162] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl45(cost_time_us=3, type="PN9oceanbase7storage12ObLobManagerE")
[2024-09-13 13:02:18.037234] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl46(cost_time_us=65, type="PN9oceanbase5share22ObGlobalAutoIncServiceE")
[2024-09-13 13:02:18.037938] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] TimeWheelBase inited success(precision=10000, start_ticket=172620373803, scan_ticket=172620373803)
[2024-09-13 13:02:18.037951] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] ObTimeWheel init success(precision=10000, real_thread_num=1)
[2024-09-13 13:02:18.038007] INFO [DETECT] init (ob_deadlock_detector_mgr.cpp:202) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObDeadLockDetectorMgr init success(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:18.038055] INFO [DETECT] init (ob_deadlock_detector_mgr.cpp:204) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] ObDeadLockDetectorMgr init called(ret=0, ret="OB_SUCCESS", lbt()="0x24edc06b 0x11bb3348 0x11bb1e95 0x11a719a2 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:18.038065] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish init mtl47(cost_time_us=823, type="PN9oceanbase5share8detector21ObDeadLockDetectorMgrE")
[2024-09-13 13:02:18.038796] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] TimeWheelBase inited success(precision=100000, start_ticket=17262037380, scan_ticket=17262037380)
[2024-09-13 13:02:18.039503] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] TimeWheelBase inited success(precision=100000, start_ticket=17262037380, scan_ticket=17262037380)
[2024-09-13 13:02:18.039513] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObTimeWheel init success(precision=100000, real_thread_num=2)
[2024-09-13 13:02:18.039518] INFO [STORAGE.TRANS] init (ob_trans_timer.cpp:188) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] transaction timer inited success
[2024-09-13 13:02:18.039527] INFO [STORAGE.TRANS] init (ob_xa_service.cpp:74) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] xa service init(ret=0)
[2024-09-13 13:02:18.039537] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish init mtl48(cost_time_us=1458, type="PN9oceanbase11transaction11ObXAServiceE")
[2024-09-13 13:02:18.039551] INFO [STORAGE.TRANS] init (ob_gts_rpc.cpp:269) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] gts response rpc inited success(self="172.16.51.35:2882", this=0x2b07c25fd228)
[2024-09-13 13:02:18.039563] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish init mtl49(cost_time_us=22, type="PN9oceanbase11transaction18ObTimestampServiceE")
[2024-09-13 13:02:18.039571] INFO [STORAGE.TRANS] init (ob_gts_rpc.cpp:269) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] gts response rpc inited success(self="172.16.51.35:2882", this=0x2b07c25fd8a0)
[2024-09-13 13:02:18.039582] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] create tg succeed(tg_id=328, tg=0x2b07c09adef0, thread_cnt=1, tg->attr_={name:StandbyTimestampService, type:2}, tg=0x2b07c09adef0)
[2024-09-13 13:02:18.039591] INFO [STORAGE.TRANS] init (ob_standby_timestamp_service.cpp:47) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] standby timestamp service init succ(tenant_id=1)
[2024-09-13 13:02:18.039596] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl50(cost_time_us=28, type="PN9oceanbase11transaction25ObStandbyTimestampServiceE")
[2024-09-13 13:02:18.039604] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl51(cost_time_us=1, type="PN9oceanbase11transaction17ObTimestampAccessE")
[2024-09-13 13:02:18.039617] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish init mtl52(cost_time_us=2, type="PN9oceanbase11transaction16ObTransIDServiceE")
[2024-09-13 13:02:18.039621] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl53(cost_time_us=0, type="PN9oceanbase11transaction17ObUniqueIDServiceE")
[2024-09-13 13:02:18.039625] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl54(cost_time_us=0, type="PN9oceanbase3sql17ObPlanBaselineMgrE")
[2024-09-13 13:02:18.039634] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish init mtl55(cost_time_us=5, type="PN9oceanbase3sql9ObPsCacheE")
[2024-09-13 13:02:18.041621] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=9] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:0})
[2024-09-13 13:02:18.041669] INFO [PALF] runTimerTask (block_gc_timer_task.cpp:101) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] BlockGCTimerTask success(ret=0, cost_time_us=55, palf_env_impl_={IPalfEnvImpl:{IPalfEnvImpl:"Dummy"}, self:"172.16.51.35:2882", log_dir:"/data1/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):100, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:0}, log_alloc_mgr_:{flying_log_task:0, flying_meta_task:0}})
[2024-09-13 13:02:18.042055] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DA1-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.042512] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DA1-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.043532] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=329, tg=0x2b07c09ef6e0, thread_cnt=1, tg->attr_={name:PlanCacheEvict, type:3}, tg=0x2b07c09ef6e0)
[2024-09-13 13:02:18.043552] INFO init (ob_plan_cache.cpp:420) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=19] start tg(tg_id_=329, tg_name=PlanCacheEvict)
[2024-09-13 13:02:18.043807] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20140][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1120986464256)
[2024-09-13 13:02:18.043952] INFO register_pm (ob_page_manager.cpp:40) [20140][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07c90d4340, pm.get_tid()=20140, tenant_id=500)
[2024-09-13 13:02:18.043987] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20140][][T1][Y0-0000000000000000-0-0] [lt=20] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=34)
[2024-09-13 13:02:18.044014] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DA2-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.044015] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] ObTimer create success(this=0x2b07c09ef700, thread_id=20140, lbt()=0x24edc06b 0x13836960 0x115a4182 0xbcf6eed 0x11a71e5c 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:18.044026] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish init mtl56(cost_time_us=4385, type="PN9oceanbase3sql11ObPlanCacheE")
[2024-09-13 13:02:18.044331] INFO run1 (ob_timer.cpp:361) [20140][][T1][Y0-0000000000000000-0-0] [lt=10] timer thread started(this=0x2b07c09ef700, tid=20140, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:18.044597] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DA2-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.052066] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DA3-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.052554] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DA3-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.053653] INFO [LIB] init (ob_detect_manager.cpp:266) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] [DM] ObDetectManager init success(self="172.16.51.35:2882", tenant_id=1, mem_factor=1.875000000000000000e-01)
[2024-09-13 13:02:18.053678] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=24] finish init mtl57(cost_time_us=9647, type="PN9oceanbase6common15ObDetectManagerE")
[2024-09-13 13:02:18.058175] INFO [SQL.DTL] mtl_init (ob_dtl_fc_server.cpp:73) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] init tenant dfc(ret=0, tenant_dfc->tenant_id_=1)
[2024-09-13 13:02:18.058201] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=21] finish init mtl58(cost_time_us=4512, type="PN9oceanbase3sql3dtl11ObTenantDfcE")
[2024-09-13 13:02:18.058231] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl59(cost_time_us=24, type="PN9oceanbase3omt9ObPxPoolsE")
[2024-09-13 13:02:18.058243] INFO [SERVER.OMT] init_compat_mode (ob_multi_tenant.cpp:305) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish init compatibility mode(tenant_id=1, compat_mode=0)
[2024-09-13 13:02:18.058255] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish init mtl60(cost_time_us=13, type="N9oceanbase3lib6Worker10CompatModeE")
[2024-09-13 13:02:18.058523] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=24] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0)
[2024-09-13 13:02:18.059530] INFO [SERVER] init (ob_dl_queue.cpp:39) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] construct leaf queue idx succ(rq_.get_push_idx()=1, tenant_id=1)
[2024-09-13 13:02:18.060797] INFO [SERVER] init (ob_dl_queue.cpp:39) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=28] construct leaf queue idx succ(rq_.get_push_idx()=2, tenant_id=1)
[2024-09-13 13:02:18.061980] INFO [SERVER] init (ob_dl_queue.cpp:39) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] construct leaf queue idx succ(rq_.get_push_idx()=3, tenant_id=1)
[2024-09-13 13:02:18.062677] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DA4-0-0] [lt=25][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.063160] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DA4-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.063405] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DA5-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.063506] INFO [SERVER] init (ob_dl_queue.cpp:39) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] construct leaf queue idx succ(rq_.get_push_idx()=4, tenant_id=1)
[2024-09-13 13:02:18.063770] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DA5-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.064818] INFO [SERVER] init (ob_dl_queue.cpp:39) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] construct leaf queue idx succ(rq_.get_push_idx()=5, tenant_id=1)
[2024-09-13 13:02:18.065839] INFO [SERVER] init (ob_dl_queue.cpp:39) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] construct leaf queue idx succ(rq_.get_push_idx()=6, tenant_id=1)
[2024-09-13 13:02:18.066920] INFO [SERVER] init (ob_dl_queue.cpp:39) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] construct leaf queue idx succ(rq_.get_push_idx()=7, tenant_id=1)
[2024-09-13 13:02:18.067000] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DA6-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.067404] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DA6-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.067904] INFO [SERVER] init (ob_dl_queue.cpp:39) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] construct leaf queue idx succ(rq_.get_push_idx()=8, tenant_id=1)
[2024-09-13 13:02:18.067925] INFO [SERVER] mtl_init (ob_mysql_request_manager.cpp:373) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] mtl init finish(tenant_id=1, mem_limit=322122547, queue_size=10000000, ret=0)
[2024-09-13 13:02:18.067936] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish init mtl61(cost_time_us=9676, type="PN9oceanbase7obmysql21ObMySQLRequestManagerE")
[2024-09-13 13:02:18.067953] INFO [STORAGE.TRANS] init (ob_tenant_weak_read_cluster_service.cpp:93) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] init succ(tenant_id=1, cluster_service_tablet_id={id:226})
[2024-09-13 13:02:18.067968] INFO [STORAGE.TRANS] init (ob_tenant_weak_read_cluster_service.cpp:94) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] init TenantWeakReadClusterService succeed(tenant_id=1, cluster_service_tablet_id={id:226})
[2024-09-13 13:02:18.067988] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] create tg succeed(tg_id=330, tg=0x2b07c09edef0, thread_cnt=1, tg->attr_={name:WeakRdSrv, type:2}, tg=0x2b07c09edef0)
[2024-09-13 13:02:18.068036] INFO [STORAGE.TRANS] init (ob_tenant_weak_read_service.cpp:114) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] tenant weak read service init succ(tenant_id=1, lbt()="0x24edc06b 0x10381170 0x10389812 0x11a72198 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:18.068047] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish init mtl62(cost_time_us=102, type="PN9oceanbase11transaction23ObTenantWeakReadServiceE")
[2024-09-13 13:02:18.068051] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl63(cost_time_us=0, type="PN9oceanbase3sql24ObTenantSqlMemoryManagerE")
[2024-09-13 13:02:18.069786] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl64(cost_time_us=1726, type="PN9oceanbase3sql3dtl24ObDTLIntermResultManagerE")
[2024-09-13 13:02:18.070641] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] create tg succeed(tg_id=331, tg=0x2b07c09ef8d0, thread_cnt=1, tg->attr_={name:ReqMemEvict, type:3}, tg=0x2b07c09ef8d0)
[2024-09-13 13:02:18.075322] INFO init (ob_sql_plan_monitor_node_list.cpp:57) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=33] start tg(tg_id_=331, tg_name=ReqMemEvict)
[2024-09-13 13:02:18.075622] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20141][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1125281431552)
[2024-09-13 13:02:18.075796] INFO register_pm (ob_page_manager.cpp:40) [20141][][T0][Y0-0000000000000000-0-0] [lt=30] register pm finish(ret=0, &pm=0x2b07c9152340, pm.get_tid()=20141, tenant_id=500)
[2024-09-13 13:02:18.075952] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20141][][T1][Y0-0000000000000000-0-0] [lt=130] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=35)
[2024-09-13 13:02:18.075972] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=18] ObTimer create success(this=0x2b07c09ef8f0, thread_id=20141, lbt()=0x24edc06b 0x13836960 0x115a4182 0x11994d5e 0x11a72336 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:18.075989] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish init mtl65(cost_time_us=6191, type="PN9oceanbase3sql21ObPlanMonitorNodeListE")
[2024-09-13 13:02:18.076017] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish init mtl66(cost_time_us=20, type="PN9oceanbase3sql19ObDataAccessServiceE")
[2024-09-13 13:02:18.076026] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl67(cost_time_us=2, type="PN9oceanbase3sql14ObDASIDServiceE")
[2024-09-13 13:02:18.076038] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl68(cost_time_us=6, type="PN9oceanbase5share6schema21ObTenantSchemaServiceE")
[2024-09-13 13:02:18.076383] INFO run1 (ob_timer.cpp:361) [20141][][T1][Y0-0000000000000000-0-0] [lt=13] timer thread started(this=0x2b07c09ef8f0, tid=20141, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:18.076674] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20142][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1129576398848)
[2024-09-13 13:02:18.076845] INFO register_pm (ob_page_manager.cpp:40) [20142][][T0][Y0-0000000000000000-0-0] [lt=29] register pm finish(ret=0, &pm=0x2b07c91d0340, pm.get_tid()=20142, tenant_id=500)
[2024-09-13 13:02:18.076891] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] init thread success(this=0x2b07baf6c5b0, id=8, ret=0)
[2024-09-13 13:02:18.076893] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20142][][T1][Y0-0000000000000000-0-0] [lt=19] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=36)
[2024-09-13 13:02:18.076907] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] thread is running function
[2024-09-13 13:02:18.076930] INFO [OCCAM] init (ob_occam_thread_pool.h:248) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] init occam thread pool success(ret=0, thread_num=1, queue_size_square_of_2=10, lbt()="0x24edc06b 0x8359ee6 0x8358cc3 0xf7eada2 0xf7ea0e9 0x11a7255e 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:18.076943] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:525) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] thread_pool_ init success(thread_pool_={this:0x2b07a0dfab58, block_ptr_.control_ptr:0x2b07c09f50f0, block_ptr_.data_ptr:0x2b07c09f5170}, thread_num_=0, queue_size_square_of_2_=0)
[2024-09-13 13:02:18.077620] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20143][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1133871366144)
[2024-09-13 13:02:18.077762] INFO register_pm (ob_page_manager.cpp:40) [20143][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07cc656340, pm.get_tid()=20143, tenant_id=500)
[2024-09-13 13:02:18.077782] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] init thread success(this=0x2b07baf6c710, id=9, ret=0)
[2024-09-13 13:02:18.077787] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20143][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=37)
[2024-09-13 13:02:18.077795] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20143][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=5] thread is running function
[2024-09-13 13:02:18.078013] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20144][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1138166333440)
[2024-09-13 13:02:18.078135] INFO register_pm (ob_page_manager.cpp:40) [20144][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07cc6d4340, pm.get_tid()=20144, tenant_id=500)
[2024-09-13 13:02:18.078156] INFO [OCCAM] init_and_start
(ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] init thread success(this=0x2b07baf6c7b0, id=10, ret=0) [2024-09-13 13:02:18.078156] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20144][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=38) [2024-09-13 13:02:18.078163] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20144][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] thread is running function [2024-09-13 13:02:18.078387] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20145][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1142461300736) [2024-09-13 13:02:18.078561] INFO register_pm (ob_page_manager.cpp:40) [20145][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07cc752340, pm.get_tid()=20145, tenant_id=500) [2024-09-13 13:02:18.078597] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] init thread success(this=0x2b07baf6c850, id=11, ret=0) [2024-09-13 13:02:18.078598] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20145][][T1][Y0-0000000000000000-0-0] [lt=23] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=39) [2024-09-13 13:02:18.078605] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20145][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=5] thread is running function [2024-09-13 13:02:18.078818] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20146][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1146756268032) [2024-09-13 13:02:18.078930] INFO register_pm (ob_page_manager.cpp:40) [20146][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07cc7d0340, pm.get_tid()=20146, tenant_id=500) [2024-09-13 13:02:18.078956] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] init thread success(this=0x2b07baf6c8f0, id=12, 
ret=0) [2024-09-13 13:02:18.078957] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20146][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=40) [2024-09-13 13:02:18.078964] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20146][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=5] thread is running function [2024-09-13 13:02:18.079543] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20147][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1151051235328) [2024-09-13 13:02:18.079660] INFO register_pm (ob_page_manager.cpp:40) [20147][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07cc856340, pm.get_tid()=20147, tenant_id=500) [2024-09-13 13:02:18.079689] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] init thread success(this=0x2b07baf6c990, id=13, ret=0) [2024-09-13 13:02:18.079690] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20147][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=41) [2024-09-13 13:02:18.079702] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20147][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9] thread is running function [2024-09-13 13:02:18.079707] INFO [OCCAM] init (ob_occam_thread_pool.h:248) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] init occam thread pool success(ret=0, thread_num=5, queue_size_square_of_2=10, lbt()="0x24edc06b 0x8359ee6 0x8358cc3 0xf7eada2 0xf7ea10a 0x11a7255e 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75") [2024-09-13 13:02:18.079721] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:525) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] thread_pool_ init success(thread_pool_={this:0x2b07a0dfacb0, block_ptr_.control_ptr:0x2b07c725f0f0, block_ptr_.data_ptr:0x2b07c725f170}, thread_num_=0, queue_size_square_of_2_=0) [2024-09-13 13:02:18.080401] INFO [STORAGE.TRANS] init 
(ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] TimeWheelBase inited success(precision=100000, start_ticket=17262037380, scan_ticket=17262037380) [2024-09-13 13:02:18.080413] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] ObTimeWheel init success(precision=100000, real_thread_num=1) [2024-09-13 13:02:18.080681] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20148][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1155346202624) [2024-09-13 13:02:18.080798] INFO register_pm (ob_page_manager.cpp:40) [20148][][T0][Y0-0000000000000000-0-0] [lt=23] register pm finish(ret=0, &pm=0x2b07cc8d4340, pm.get_tid()=20148, tenant_id=500) [2024-09-13 13:02:18.080830] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimeWheel start success(timer_name="FrzTrigger") [2024-09-13 13:02:18.080831] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20148][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=42) [2024-09-13 13:02:18.080840] INFO [OCCAM] init_and_start (ob_occam_timer.h:546) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] init ObOccamTimer success(ret=0) [2024-09-13 13:02:18.080847] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl69(cost_time_us=4796, type="PN9oceanbase7storage15ObTenantFreezerE") [2024-09-13 13:02:18.080863] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=332, tg=0x2b07c725fc80, thread_cnt=3, tg->attr_={name:LSFreeze, type:4}, tg=0x2b07c725fc80) [2024-09-13 13:02:18.081136] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20149][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1159641169920) [2024-09-13 13:02:18.081591] 
INFO register_pm (ob_page_manager.cpp:40) [20149][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07cc952340, pm.get_tid()=20149, tenant_id=500) [2024-09-13 13:02:18.081614] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20149][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=43) [2024-09-13 13:02:18.081846] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20150][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1163936137216) [2024-09-13 13:02:18.081952] INFO register_pm (ob_page_manager.cpp:40) [20150][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07cc9d0340, pm.get_tid()=20150, tenant_id=500) [2024-09-13 13:02:18.082001] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20150][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=44) [2024-09-13 13:02:18.082269] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20151][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1168231104512) [2024-09-13 13:02:18.082410] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DA7-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.082457] INFO register_pm (ob_page_manager.cpp:40) [20151][][T0][Y0-0000000000000000-0-0] [lt=83] register pm finish(ret=0, &pm=0x2b07cce56340, pm.get_tid()=20151, tenant_id=500) [2024-09-13 13:02:18.082476] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20151][][T1][Y0-0000000000000000-0-0] [lt=9] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=45) [2024-09-13 13:02:18.082476] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] simple thread pool init success(name=LSFreeze, thread_num=3, task_num_limit=5) [2024-09-13 13:02:18.082486] INFO init (ob_ls_freeze_thread.cpp:88) 
[19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=332, tg_name=LSFreeze) [2024-09-13 13:02:18.082494] INFO [STORAGE] init (ob_ls_freeze_thread.cpp:108) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObLSFreezeThread init finished(ret=0) [2024-09-13 13:02:18.082508] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl70(cost_time_us=1648, type="PN9oceanbase7storage10checkpoint19ObCheckPointServiceE") [2024-09-13 13:02:18.082518] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl71(cost_time_us=3, type="PN9oceanbase7storage10checkpoint17ObTabletGCServiceE") [2024-09-13 13:02:18.082531] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObSliceAlloc init finished(bsize_=7936, isize_=256, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:18.082549] INFO [ARCHIVE] init (large_buffer_pool.cpp:53) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=16] LargeBufferPool init succ(this={inited:true, total_limit:1073741824, label:"ArcSendTask", array:[]}) [2024-09-13 13:02:18.082563] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObSliceAlloc init finished(bsize_=7936, isize_=88, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:18.082572] INFO [ARCHIVE] init (ob_archive_round_mgr.cpp:45) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObArchiveRoundMgr init succ [2024-09-13 13:02:18.082933] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DA7-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.091242] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DA8-0-0] [lt=42][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.091655] WDIAG 
[RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DA8-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.093231] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=17] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.093316] INFO [ARCHIVE] init (ob_archive_scheduler.cpp:64) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] archive scheduler init succ(tenant_id=1) [2024-09-13 13:02:18.093342] INFO [ARCHIVE] init (ob_ls_meta_recorder.cpp:158) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=19] ObLSMetaRecorder init succ [2024-09-13 13:02:18.093352] INFO [ARCHIVE] init (ob_archive_timer.cpp:60) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] ObArchiveTimer init succ [2024-09-13 13:02:18.093366] INFO [ARCHIVE] init (ob_archive_service.cpp:105) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] archive service init succ(tenant_id=1) [2024-09-13 13:02:18.093374] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl72(cost_time_us=10851, type="PN9oceanbase7archive16ObArchiveServiceE") [2024-09-13 13:02:18.093600] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DA9-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.093896] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=7] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.094007] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DA9-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.094103] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) 
[20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=6] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.094110] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20152][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1172526071808) [2024-09-13 13:02:18.094125] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=13] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.094136] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=5] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.094263] INFO register_pm (ob_page_manager.cpp:40) [20152][][T0][Y0-0000000000000000-0-0] [lt=22] register pm finish(ret=0, &pm=0x2b07cced4340, pm.get_tid()=20152, tenant_id=500) [2024-09-13 13:02:18.094301] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20152][][T1][Y0-0000000000000000-0-0] [lt=19] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=46) [2024-09-13 13:02:18.094309] INFO [COMMON] run1 (ob_dedup_queue.cpp:361) [20152][][T1][Y0-0000000000000000-0-0] [lt=7] dedup queue thread start(this=0x2b07c33e4070) [2024-09-13 13:02:18.094333] INFO [COMMON] init (ob_dedup_queue.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] init dedup-queue:(thread_num=1, queue_size=10000, task_map_size=10000, total_mem_limit=536870912, hold_mem_limit=268435456, page_size=65408, this=0x2b07c33e4070, lbt="0x24edc06b 0x13820f43 0x13820411 0x10ad5904 0x11a72786 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75") [2024-09-13 13:02:18.094412] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=13] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 
13:02:18.094738] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=12] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.095140] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=9] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.096219] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.096882] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish init mtl73(cost_time_us=3500, type="PN9oceanbase7storage23ObTenantTabletSchedulerE") [2024-09-13 13:02:18.097937] INFO [COMMON] get_default_config (ob_dag_scheduler.cpp:1815) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=17] calc default config(work_thread_num=43, default_work_thread_num=43) [2024-09-13 13:02:18.097967] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=21] create tg succeed(tg_id=333, tg=0x2b07bf1f9ec0, thread_cnt=1, tg->attr_={name:DagScheduler, type:2}, tg=0x2b07bf1f9ec0) [2024-09-13 13:02:18.097982] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] create tg succeed(tg_id=334, tg=0x2b07c09efc50, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c09efc50) [2024-09-13 13:02:18.097993] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id_=334, tg_name=DagWorker) [2024-09-13 13:02:18.098267] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20153][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1176821039104) [2024-09-13 13:02:18.098404] INFO 
register_pm (ob_page_manager.cpp:40) [20153][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07ccf52340, pm.get_tid()=20153, tenant_id=500) [2024-09-13 13:02:18.098431] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20153][][T1][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=47) [2024-09-13 13:02:18.098433] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=335, tg=0x2b07bf1fbec0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07bf1fbec0) [2024-09-13 13:02:18.098450] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=17] start tg(tg_id_=335, tg_name=DagWorker) [2024-09-13 13:02:18.098708] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20154][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1181116006400) [2024-09-13 13:02:18.098831] INFO register_pm (ob_page_manager.cpp:40) [20154][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07ccfd0340, pm.get_tid()=20154, tenant_id=500) [2024-09-13 13:02:18.098851] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20154][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=48) [2024-09-13 13:02:18.098854] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=336, tg=0x2b07c09f5ac0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c09f5ac0) [2024-09-13 13:02:18.098862] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=336, tg_name=DagWorker) [2024-09-13 13:02:18.099116] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20155][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1185410973696) [2024-09-13 13:02:18.099214] INFO register_pm 
(ob_page_manager.cpp:40) [20155][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07ce056340, pm.get_tid()=20155, tenant_id=500) [2024-09-13 13:02:18.099248] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20155][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=49) [2024-09-13 13:02:18.099250] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=337, tg=0x2b07c09f5d90, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c09f5d90) [2024-09-13 13:02:18.099257] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=337, tg_name=DagWorker) [2024-09-13 13:02:18.099523] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20156][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1189705940992) [2024-09-13 13:02:18.099646] INFO register_pm (ob_page_manager.cpp:40) [20156][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07ce0d4340, pm.get_tid()=20156, tenant_id=500) [2024-09-13 13:02:18.099671] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20156][][T1][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=50) [2024-09-13 13:02:18.099673] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=338, tg=0x2b07c7261280, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7261280) [2024-09-13 13:02:18.099680] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=338, tg_name=DagWorker) [2024-09-13 13:02:18.099918] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20157][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1194000908288) [2024-09-13 13:02:18.100013] INFO register_pm (ob_page_manager.cpp:40) 
[20157][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07ce152340, pm.get_tid()=20157, tenant_id=500) [2024-09-13 13:02:18.100041] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20157][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=51) [2024-09-13 13:02:18.100043] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] create tg succeed(tg_id=339, tg=0x2b07c7261550, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7261550) [2024-09-13 13:02:18.100049] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=339, tg_name=DagWorker) [2024-09-13 13:02:18.100280] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20158][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1198295875584) [2024-09-13 13:02:18.100400] INFO register_pm (ob_page_manager.cpp:40) [20158][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07ce1d0340, pm.get_tid()=20158, tenant_id=500) [2024-09-13 13:02:18.100455] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20158][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=52) [2024-09-13 13:02:18.100456] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=340, tg=0x2b07c7261820, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7261820) [2024-09-13 13:02:18.100463] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=340, tg_name=DagWorker) [2024-09-13 13:02:18.100694] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20159][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1202590842880) [2024-09-13 13:02:18.100798] INFO register_pm (ob_page_manager.cpp:40) 
[20159][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07ce256340, pm.get_tid()=20159, tenant_id=500) [2024-09-13 13:02:18.100828] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20159][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=53) [2024-09-13 13:02:18.100829] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=341, tg=0x2b07c7261af0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7261af0) [2024-09-13 13:02:18.100836] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=341, tg_name=DagWorker) [2024-09-13 13:02:18.101022] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20160][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1206885810176) [2024-09-13 13:02:18.101138] INFO register_pm (ob_page_manager.cpp:40) [20160][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07ce2d4340, pm.get_tid()=20160, tenant_id=500) [2024-09-13 13:02:18.101179] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=342, tg=0x2b07c7261dc0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7261dc0) [2024-09-13 13:02:18.101180] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20160][][T1][Y0-0000000000000000-0-0] [lt=22] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=54) [2024-09-13 13:02:18.101189] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id_=342, tg_name=DagWorker) [2024-09-13 13:02:18.101382] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20161][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1211180777472) [2024-09-13 13:02:18.101535] INFO register_pm (ob_page_manager.cpp:40) 
[20161][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07ce352340, pm.get_tid()=20161, tenant_id=500) [2024-09-13 13:02:18.101568] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20161][][T1][Y0-0000000000000000-0-0] [lt=19] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=55) [2024-09-13 13:02:18.101570] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=343, tg=0x2b07c725d230, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c725d230) [2024-09-13 13:02:18.101577] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=343, tg_name=DagWorker) [2024-09-13 13:02:18.101802] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20162][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1215475744768) [2024-09-13 13:02:18.101962] INFO register_pm (ob_page_manager.cpp:40) [20162][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07ce3d0340, pm.get_tid()=20162, tenant_id=500) [2024-09-13 13:02:18.102007] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20162][][T1][Y0-0000000000000000-0-0] [lt=32] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=56) [2024-09-13 13:02:18.102009] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=344, tg=0x2b07c725d500, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c725d500) [2024-09-13 13:02:18.102021] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] start tg(tg_id_=344, tg_name=DagWorker) [2024-09-13 13:02:18.102254] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20163][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1219770712064) [2024-09-13 13:02:18.102351] INFO register_pm (ob_page_manager.cpp:40) 
[20163][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07ce456340, pm.get_tid()=20163, tenant_id=500)
[2024-09-13 13:02:18.102374] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20163][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=57)
[2024-09-13 13:02:18.102376] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=345, tg=0x2b07c725d7d0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c725d7d0)
[2024-09-13 13:02:18.102383] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=345, tg_name=DagWorker)
[2024-09-13 13:02:18.102616] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20164][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1224065679360)
[2024-09-13 13:02:18.102726] INFO register_pm (ob_page_manager.cpp:40) [20164][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07ce4d4340, pm.get_tid()=20164, tenant_id=500)
[2024-09-13 13:02:18.102749] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20164][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=58)
[2024-09-13 13:02:18.102751] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=346, tg=0x2b07c725daa0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c725daa0)
[2024-09-13 13:02:18.102757] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=346, tg_name=DagWorker)
[2024-09-13 13:02:18.102835] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DAA-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.102986] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20165][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1228360646656)
[2024-09-13 13:02:18.103130] INFO register_pm (ob_page_manager.cpp:40) [20165][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07ce552340, pm.get_tid()=20165, tenant_id=500)
[2024-09-13 13:02:18.103147] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20165][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=59)
[2024-09-13 13:02:18.103149] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=347, tg=0x2b07c725dd70, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c725dd70)
[2024-09-13 13:02:18.103157] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=347, tg_name=DagWorker)
[2024-09-13 13:02:18.103345] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DAA-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.103394] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20166][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1232655613952)
[2024-09-13 13:02:18.103493] INFO register_pm (ob_page_manager.cpp:40) [20166][][T0][Y0-0000000000000000-0-0] [lt=8] register pm finish(ret=0, &pm=0x2b07ce5d0340, pm.get_tid()=20166, tenant_id=500)
[2024-09-13 13:02:18.103510] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20166][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=60)
[2024-09-13 13:02:18.103519] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] create tg succeed(tg_id=348, tg=0x2b07c726e1c0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726e1c0)
[2024-09-13 13:02:18.103529] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=348, tg_name=DagWorker)
[2024-09-13 13:02:18.103566] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DAB-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.103780] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20167][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1236950581248)
[2024-09-13 13:02:18.103895] INFO register_pm (ob_page_manager.cpp:40) [20167][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07ce656340, pm.get_tid()=20167, tenant_id=500)
[2024-09-13 13:02:18.103920] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20167][][T1][Y0-0000000000000000-0-0] [lt=20] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=61)
[2024-09-13 13:02:18.103922] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=349, tg=0x2b07c726e490, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726e490)
[2024-09-13 13:02:18.103928] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=349, tg_name=DagWorker)
[2024-09-13 13:02:18.103966] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DAB-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.104181] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20168][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1241245548544)
[2024-09-13 13:02:18.104305] INFO register_pm (ob_page_manager.cpp:40) [20168][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07ce6d4340, pm.get_tid()=20168, tenant_id=500)
[2024-09-13 13:02:18.104329] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=350, tg=0x2b07c726e760, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726e760)
[2024-09-13 13:02:18.104335] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=350, tg_name=DagWorker)
[2024-09-13 13:02:18.104328] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20168][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=62)
[2024-09-13 13:02:18.104578] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20169][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1245540515840)
[2024-09-13 13:02:18.104701] INFO register_pm (ob_page_manager.cpp:40) [20169][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07ce752340, pm.get_tid()=20169, tenant_id=500)
[2024-09-13 13:02:18.104727] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20169][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=63)
[2024-09-13 13:02:18.104728] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=351, tg=0x2b07c726ea30, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726ea30)
[2024-09-13 13:02:18.104735] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=351, tg_name=DagWorker)
[2024-09-13 13:02:18.104954] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20170][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1249835483136)
[2024-09-13 13:02:18.105057] INFO register_pm (ob_page_manager.cpp:40) [20170][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07ce7d0340, pm.get_tid()=20170, tenant_id=500)
[2024-09-13 13:02:18.105079] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20170][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=64)
[2024-09-13 13:02:18.105080] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=352, tg=0x2b07c726ed00, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726ed00)
[2024-09-13 13:02:18.105087] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=352, tg_name=DagWorker)
[2024-09-13 13:02:18.105317] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20171][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1254130450432)
[2024-09-13 13:02:18.105449] INFO register_pm (ob_page_manager.cpp:40) [20171][][T0][Y0-0000000000000000-0-0] [lt=8] register pm finish(ret=0, &pm=0x2b07ce856340, pm.get_tid()=20171, tenant_id=500)
[2024-09-13 13:02:18.105473] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20171][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=65)
[2024-09-13 13:02:18.105480] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=353, tg=0x2b07c726efd0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726efd0)
[2024-09-13 13:02:18.105490] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=353, tg_name=DagWorker)
[2024-09-13 13:02:18.105723] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20172][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1258425417728)
[2024-09-13 13:02:18.105810] INFO register_pm (ob_page_manager.cpp:40) [20172][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07ce8d4340, pm.get_tid()=20172, tenant_id=500)
[2024-09-13 13:02:18.105847] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20172][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=66)
[2024-09-13 13:02:18.105848] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=354, tg=0x2b07c726f2a0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726f2a0)
[2024-09-13 13:02:18.105855] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=354, tg_name=DagWorker)
[2024-09-13 13:02:18.106073] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20173][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1262720385024)
[2024-09-13 13:02:18.106160] INFO register_pm (ob_page_manager.cpp:40) [20173][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07ce952340, pm.get_tid()=20173, tenant_id=500)
[2024-09-13 13:02:18.106188] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20173][][T1][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=67)
[2024-09-13 13:02:18.106189] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=355, tg=0x2b07c726f570, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726f570)
[2024-09-13 13:02:18.106196] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=355, tg_name=DagWorker)
[2024-09-13 13:02:18.106421] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20174][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1267015352320)
[2024-09-13 13:02:18.106524] INFO register_pm (ob_page_manager.cpp:40) [20174][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07ce9d0340, pm.get_tid()=20174, tenant_id=500)
[2024-09-13 13:02:18.106546] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20174][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=68)
[2024-09-13 13:02:18.106548] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=356, tg=0x2b07c726f840, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726f840)
[2024-09-13 13:02:18.106562] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] start tg(tg_id_=356, tg_name=DagWorker)
[2024-09-13 13:02:18.106783] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20175][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1271310319616)
[2024-09-13 13:02:18.106885] INFO register_pm (ob_page_manager.cpp:40) [20175][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07cea56340, pm.get_tid()=20175, tenant_id=500)
[2024-09-13 13:02:18.106913] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20175][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=69)
[2024-09-13 13:02:18.106915] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=357, tg=0x2b07c726fb10, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726fb10)
[2024-09-13 13:02:18.106921] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=357, tg_name=DagWorker)
[2024-09-13 13:02:18.107153] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20176][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1275605286912)
[2024-09-13 13:02:18.107239] INFO register_pm (ob_page_manager.cpp:40) [20176][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07cead4340, pm.get_tid()=20176, tenant_id=500)
[2024-09-13 13:02:18.107260] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20176][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=70)
[2024-09-13 13:02:18.107262] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=358, tg=0x2b07c726fde0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c726fde0)
[2024-09-13 13:02:18.107269] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=358, tg_name=DagWorker)
[2024-09-13 13:02:18.107485] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20177][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1279900254208)
[2024-09-13 13:02:18.107578] INFO register_pm (ob_page_manager.cpp:40) [20177][][T0][Y0-0000000000000000-0-0] [lt=8] register pm finish(ret=0, &pm=0x2b07ceb52340, pm.get_tid()=20177, tenant_id=500)
[2024-09-13 13:02:18.107596] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20177][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=71)
[2024-09-13 13:02:18.107604] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=359, tg=0x2b07c72741c0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c72741c0)
[2024-09-13 13:02:18.107613] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=359, tg_name=DagWorker)
[2024-09-13 13:02:18.107829] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=16] PNIO [ratelimit] time: 1726203738107828, bytes: 1985513, bw: 1.115153 MB/s, add_ts: 1003137, add_bytes: 1172991
[2024-09-13 13:02:18.107846] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20178][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1284195221504)
[2024-09-13 13:02:18.107935] INFO register_pm (ob_page_manager.cpp:40) [20178][][T0][Y0-0000000000000000-0-0] [lt=8] register pm finish(ret=0, &pm=0x2b07cebd0340, pm.get_tid()=20178, tenant_id=500)
[2024-09-13 13:02:18.107952] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20178][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=72)
[2024-09-13 13:02:18.107955] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=360, tg=0x2b07c7274490, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7274490)
[2024-09-13 13:02:18.107961] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=360, tg_name=DagWorker)
[2024-09-13 13:02:18.108152] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DAC-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.108187] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20179][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1288490188800)
[2024-09-13 13:02:18.108276] INFO register_pm (ob_page_manager.cpp:40) [20179][][T0][Y0-0000000000000000-0-0] [lt=8] register pm finish(ret=0, &pm=0x2b07cec56340, pm.get_tid()=20179, tenant_id=500)
[2024-09-13 13:02:18.108293] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20179][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=73)
[2024-09-13 13:02:18.108295] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] create tg succeed(tg_id=361, tg=0x2b07c7274760, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7274760)
[2024-09-13 13:02:18.108302] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=361, tg_name=DagWorker)
[2024-09-13 13:02:18.108559] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20180][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1292785156096)
[2024-09-13 13:02:18.108617] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DAC-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.108673] INFO register_pm (ob_page_manager.cpp:40) [20180][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07cecd4340, pm.get_tid()=20180, tenant_id=500)
[2024-09-13 13:02:18.108691] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20180][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=74)
[2024-09-13 13:02:18.108693] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] create tg succeed(tg_id=362, tg=0x2b07c7274a30, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7274a30)
[2024-09-13 13:02:18.108699] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] start tg(tg_id_=362, tg_name=DagWorker)
[2024-09-13 13:02:18.108915] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20181][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1297080123392)
[2024-09-13 13:02:18.109005] INFO register_pm (ob_page_manager.cpp:40) [20181][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07ced52340, pm.get_tid()=20181, tenant_id=500)
[2024-09-13 13:02:18.109024] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20181][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=75)
[2024-09-13 13:02:18.109025] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=363, tg=0x2b07c7274d00, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7274d00)
[2024-09-13 13:02:18.109032] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=363, tg_name=DagWorker)
[2024-09-13 13:02:18.109251] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20182][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1301375090688)
[2024-09-13 13:02:18.109360] INFO register_pm (ob_page_manager.cpp:40) [20182][][T0][Y0-0000000000000000-0-0] [lt=8] register pm finish(ret=0, &pm=0x2b07cedd0340, pm.get_tid()=20182, tenant_id=500)
[2024-09-13 13:02:18.109385] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20182][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=76)
[2024-09-13 13:02:18.109393] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=364, tg=0x2b07c7274fd0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7274fd0)
[2024-09-13 13:02:18.109402] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=364, tg_name=DagWorker)
[2024-09-13 13:02:18.109651] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20183][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1305670057984)
[2024-09-13 13:02:18.109744] INFO register_pm (ob_page_manager.cpp:40) [20183][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07cee56340, pm.get_tid()=20183, tenant_id=500)
[2024-09-13 13:02:18.109767] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20183][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=77)
[2024-09-13 13:02:18.109769] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=365, tg=0x2b07c72752a0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c72752a0)
[2024-09-13 13:02:18.109776] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=365, tg_name=DagWorker)
[2024-09-13 13:02:18.109999] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20184][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1309965025280)
[2024-09-13 13:02:18.110081] INFO register_pm (ob_page_manager.cpp:40) [20184][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07ceed4340, pm.get_tid()=20184, tenant_id=500)
[2024-09-13 13:02:18.110104] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20184][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=78)
[2024-09-13 13:02:18.110106] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=366, tg=0x2b07c7275570, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7275570)
[2024-09-13 13:02:18.110117] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id_=366, tg_name=DagWorker)
[2024-09-13 13:02:18.110344] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20185][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1314259992576)
[2024-09-13 13:02:18.110459] INFO register_pm (ob_page_manager.cpp:40) [20185][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07cef52340, pm.get_tid()=20185, tenant_id=500)
[2024-09-13 13:02:18.110483] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20185][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=79)
[2024-09-13 13:02:18.110484] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=367, tg=0x2b07c7275840, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7275840)
[2024-09-13 13:02:18.110496] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] start tg(tg_id_=367, tg_name=DagWorker)
[2024-09-13 13:02:18.110680] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20186][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1318554959872)
[2024-09-13 13:02:18.110767] INFO register_pm (ob_page_manager.cpp:40) [20186][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07cefd0340, pm.get_tid()=20186, tenant_id=500)
[2024-09-13 13:02:18.110789] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20186][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=80)
[2024-09-13 13:02:18.110791] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=368, tg=0x2b07c7275b10, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7275b10)
[2024-09-13 13:02:18.110797] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=368, tg_name=DagWorker)
[2024-09-13 13:02:18.111005] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20187][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1322849927168)
[2024-09-13 13:02:18.111096] INFO register_pm (ob_page_manager.cpp:40) [20187][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07cf056340, pm.get_tid()=20187, tenant_id=500)
[2024-09-13 13:02:18.111119] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20187][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=81)
[2024-09-13 13:02:18.111121] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=369, tg=0x2b07c7275de0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c7275de0)
[2024-09-13 13:02:18.111127] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=369, tg_name=DagWorker)
[2024-09-13 13:02:18.111367] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20188][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1327144894464)
[2024-09-13 13:02:18.111473] INFO register_pm (ob_page_manager.cpp:40) [20188][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07cf0d4340, pm.get_tid()=20188, tenant_id=500)
[2024-09-13 13:02:18.111496] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20188][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=82)
[2024-09-13 13:02:18.111505] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=370, tg=0x2b07c727a1c0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c727a1c0)
[2024-09-13 13:02:18.111512] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=370, tg_name=DagWorker)
[2024-09-13 13:02:18.111729] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20189][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1331439861760)
[2024-09-13 13:02:18.111818] INFO register_pm (ob_page_manager.cpp:40) [20189][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07cf152340, pm.get_tid()=20189, tenant_id=500)
[2024-09-13 13:02:18.111837] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20189][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=83)
[2024-09-13 13:02:18.111839] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=371, tg=0x2b07c727a490, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c727a490)
[2024-09-13 13:02:18.111846] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=371, tg_name=DagWorker)
[2024-09-13 13:02:18.112065] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20190][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1335734829056)
[2024-09-13 13:02:18.112158] INFO register_pm (ob_page_manager.cpp:40) [20190][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07cf1d0340, pm.get_tid()=20190, tenant_id=500)
[2024-09-13 13:02:18.112175] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20190][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=84)
[2024-09-13 13:02:18.112178] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] create tg succeed(tg_id=372, tg=0x2b07c727a760, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c727a760)
[2024-09-13 13:02:18.112185] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=372, tg_name=DagWorker)
[2024-09-13 13:02:18.112431] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20191][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1340029796352)
[2024-09-13 13:02:18.112542] INFO register_pm (ob_page_manager.cpp:40) [20191][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07cf256340, pm.get_tid()=20191, tenant_id=500)
[2024-09-13 13:02:18.112560] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20191][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=85)
[2024-09-13 13:02:18.112563] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=373, tg=0x2b07c727aa30, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c727aa30)
[2024-09-13 13:02:18.112572] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=373, tg_name=DagWorker)
[2024-09-13 13:02:18.113136] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20192][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1344324763648)
[2024-09-13 13:02:18.113248] INFO register_pm (ob_page_manager.cpp:40) [20192][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07cf2d4340, pm.get_tid()=20192, tenant_id=500)
[2024-09-13 13:02:18.113266] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20192][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=86)
[2024-09-13 13:02:18.113268] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=374, tg=0x2b07c727ad00, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c727ad00)
[2024-09-13 13:02:18.113286] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=17] start tg(tg_id_=374, tg_name=DagWorker)
[2024-09-13 13:02:18.113531] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20193][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1348619730944)
[2024-09-13 13:02:18.113629] INFO register_pm (ob_page_manager.cpp:40) [20193][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07cf352340, pm.get_tid()=20193, tenant_id=500)
[2024-09-13 13:02:18.113657] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20193][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=87)
[2024-09-13 13:02:18.113663] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=375, tg=0x2b07c727afd0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c727afd0)
[2024-09-13 13:02:18.113671] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=375, tg_name=DagWorker)
[2024-09-13 13:02:18.113888] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20194][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1352914698240)
[2024-09-13 13:02:18.113979] INFO register_pm (ob_page_manager.cpp:40) [20194][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07cf3d0340, pm.get_tid()=20194, tenant_id=500)
[2024-09-13 13:02:18.114000] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20194][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=88)
[2024-09-13 13:02:18.114002] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=376, tg=0x2b07c727b2a0, thread_cnt=1, tg->attr_={name:DagWorker, type:2}, tg=0x2b07c727b2a0)
[2024-09-13 13:02:18.114009] INFO start (ob_dag_scheduler.cpp:1367) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=376, tg_name=DagWorker)
[2024-09-13 13:02:18.114185] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20195][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1357209665536)
[2024-09-13 13:02:18.114278] INFO register_pm (ob_page_manager.cpp:40) [20195][][T0][Y0-0000000000000000-0-0] [lt=25] register pm finish(ret=0, &pm=0x2b07cf456340, pm.get_tid()=20195, tenant_id=500)
[2024-09-13 13:02:18.114299] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20195][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=89)
[2024-09-13 13:02:18.114300] INFO init (ob_dag_scheduler.cpp:1659) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=333, tg_name=DagScheduler)
[2024-09-13 13:02:18.114491] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20196][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1361504632832)
[2024-09-13 13:02:18.114605] INFO register_pm (ob_page_manager.cpp:40) [20196][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07cf4d4340, pm.get_tid()=20196, tenant_id=500)
[2024-09-13 13:02:18.114624] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20196][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=90)
[2024-09-13 13:02:18.114629] INFO [COMMON] init (ob_dag_scheduler.cpp:1671) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] ObTenantDagScheduler is inited(ret=0, work_thread_num=43)
[2024-09-13 13:02:18.114636] INFO [COMMON] mtl_init (ob_dag_scheduler.cpp:1540) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] success to init ObTenantDagScheduler for tenant(ret=0, MTL_ID()=1, scheduler=0x2b07c33d4900)
[2024-09-13 13:02:18.114646] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish init mtl74(cost_time_us=17747, type="PN9oceanbase5share20ObTenantDagSchedulerE")
[2024-09-13 13:02:18.114657] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl75(cost_time_us=7, type="PN9oceanbase7storage18ObStorageHAServiceE")
[2024-09-13 13:02:18.114677] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=377, tg=0x2b07c727b3e0, thread_cnt=1, tg->attr_={name:FreInfoReload, type:3}, tg=0x2b07c727b3e0)
[2024-09-13 13:02:18.114689] INFO init (ob_tenant_freeze_info_mgr.cpp:98) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] start tg(tg_id_=377, tg_name=FreInfoReload)
[2024-09-13 13:02:18.115045] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20197][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1365799600128)
[2024-09-13 13:02:18.115150] INFO register_pm (ob_page_manager.cpp:40) [20197][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07cf552340, pm.get_tid()=20197, tenant_id=500)
[2024-09-13 13:02:18.115176] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20197][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=91)
[2024-09-13 13:02:18.115218] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimer create success(this=0x2b07c727b400, thread_id=20197, lbt()=0x24edc06b 0x13836960 0x115a4182 0x10abbd34 0x11a72924 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:18.115257] INFO [STORAGE] mtl_init (ob_tenant_freeze_info_mgr.cpp:80) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=16] success to init TenantFreezeInfoMgr(MTL_ID()=1)
[2024-09-13 13:02:18.115271] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] finish init mtl76(cost_time_us=611, type="PN9oceanbase7storage21ObTenantFreezeInfoMgrE")
[2024-09-13 13:02:18.115286] INFO [STORAGE.TRANS] init (ob_tx_loop_worker.cpp:41) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] [Tx Loop Worker] init
[2024-09-13 13:02:18.115306] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish init mtl77(cost_time_us=15, type="PN9oceanbase11transaction14ObTxLoopWorkerE")
[2024-09-13 13:02:18.115318] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish init mtl78(cost_time_us=3, type="PN9oceanbase7storage15ObAccessServiceE")
[2024-09-13 13:02:18.115333] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl79(cost_time_us=8, type="PN9oceanbase7storage17ObTransferServiceE")
[2024-09-13 13:02:18.115353] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] create tg succeed(tg_id=378, tg=0x2b07c727b5d0, thread_cnt=4, tg->attr_={name:TransferSrv, type:1}, tg=0x2b07c727b5d0)
[2024-09-13 13:02:18.115370] INFO start (ob_tenant_thread_helper.cpp:76) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] start tg(tg_id_=378, tg_name=TransferSrv)
[2024-09-13 13:02:18.115492] INFO run1 (ob_timer.cpp:361) [20197][][T1][Y0-0000000000000000-0-0] [lt=5] timer thread started(this=0x2b07c727b400, tid=20197, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:18.115637] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20198][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1370094567424)
[2024-09-13 13:02:18.115749] INFO register_pm (ob_page_manager.cpp:40) [20198][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07cf5d0340, pm.get_tid()=20198, tenant_id=500)
[2024-09-13 13:02:18.115773] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20198][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=92)
[2024-09-13 13:02:18.115779] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20198][][T1][Y0-0000000000000000-0-0] [lt=5] new reentrant thread created(idx=0)
[2024-09-13 13:02:18.116075] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20199][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1374389534720)
[2024-09-13 13:02:18.116166] INFO register_pm (ob_page_manager.cpp:40) [20199][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07cf656340, pm.get_tid()=20199, tenant_id=500)
[2024-09-13 13:02:18.116189] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20199][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=93)
[2024-09-13 13:02:18.116194] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20199][][T1][Y0-0000000000000000-0-0] [lt=4] new reentrant thread created(idx=1)
[2024-09-13 13:02:18.116495] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20200][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1378684502016)
[2024-09-13 13:02:18.116588] INFO register_pm (ob_page_manager.cpp:40) [20200][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07cf6d4340, pm.get_tid()=20200, tenant_id=500)
[2024-09-13 13:02:18.116611] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20200][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=94)
[2024-09-13 13:02:18.116616] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20200][][T1][Y0-0000000000000000-0-0] [lt=5] new reentrant thread created(idx=2)
[2024-09-13 13:02:18.116923] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20201][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1382979469312)
[2024-09-13 13:02:18.117012] INFO register_pm (ob_page_manager.cpp:40) [20201][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07cf752340, pm.get_tid()=20201, tenant_id=500)
[2024-09-13 13:02:18.117030] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20201][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=95)
[2024-09-13 13:02:18.117040] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20201][][T1][Y0-0000000000000000-0-0] [lt=10] new reentrant thread created(idx=3)
[2024-09-13 13:02:18.117049] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] finish init mtl80(cost_time_us=1705, type="PN9oceanbase10rootserver23ObTenantTransferServiceE")
[2024-09-13 13:02:18.117100] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish init mtl81(cost_time_us=44, type="PN9oceanbase7storage16ObRebuildServiceE")
[2024-09-13 13:02:18.117121] INFO [DATA_DICT] init (ob_data_dict_storager.cpp:115) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] data_dict_storager init success(tenant_id=1)
[2024-09-13 13:02:18.117131] INFO [DATA_DICT] init
(ob_data_dict_service.cpp:101) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] init datadict_service(ret=0, ret="OB_SUCCESS", tenant_id=1) [2024-09-13 13:02:18.117142] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish init mtl82(cost_time_us=32, type="PN9oceanbase8datadict17ObDataDictServiceE") [2024-09-13 13:02:18.117289] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish init mtl83(cost_time_us=133, type="PN9oceanbase8observer18ObTableLoadServiceE") [2024-09-13 13:02:18.117307] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish init mtl84(cost_time_us=8, type="PN9oceanbase8observer26ObTableLoadResourceServiceE") [2024-09-13 13:02:18.117319] INFO [MVCC] init (ob_multi_version_garbage_collector.cpp:67) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] multi version garbage collector init(this=0x2b07c33f0f80) [2024-09-13 13:02:18.117332] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish init mtl85(cost_time_us=15, type="PN9oceanbase19concurrency_control30ObMultiVersionGarbageCollectorE") [2024-09-13 13:02:18.117346] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=379, tg=0x2b07c7281310, thread_cnt=1, tg->attr_={name:ReqMemEvict, type:3}, tg=0x2b07c7281310) [2024-09-13 13:02:18.117360] INFO init (ob_udr_mgr.cpp:117) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] start tg(tg_id_=379, tg_name=ReqMemEvict) [2024-09-13 13:02:18.117718] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20202][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1387274436608) [2024-09-13 13:02:18.117828] INFO register_pm (ob_page_manager.cpp:40) [20202][][T0][Y0-0000000000000000-0-0] [lt=16] register 
pm finish(ret=0, &pm=0x2b07cf7d0340, pm.get_tid()=20202, tenant_id=500) [2024-09-13 13:02:18.117850] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20202][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=96) [2024-09-13 13:02:18.117912] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ObTimer create success(this=0x2b07c7281330, thread_id=20202, lbt()=0x24edc06b 0x13836960 0x115a4182 0xf3c0bff 0x11a72e88 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.118019] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=16] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:18.118210] INFO run1 (ob_timer.cpp:361) [20202][][T1][Y0-0000000000000000-0-0] [lt=7] timer thread started(this=0x2b07c7281330, tid=20202, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.119229] INFO [SQL.QRR] init (ob_udr_item_mgr.cpp:239) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=25] init rewrite rule item mapping manager(ret=0) [2024-09-13 13:02:18.119257] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=16] finish init mtl86(cost_time_us=1920, type="PN9oceanbase3sql8ObUDRMgrE") [2024-09-13 13:02:18.119650] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DAD-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.119867] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] create tg succeed(tg_id=380, tg=0x2b07c7281500, thread_cnt=1, tg->attr_={name:ReqMemEvict, type:3}, tg=0x2b07c7281500) [2024-09-13 13:02:18.119907] INFO init (ob_flt_span_mgr.cpp:47) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=37] start tg(tg_id_=380, tg_name=ReqMemEvict) [2024-09-13 13:02:18.120148] INFO [SHARE] 
get_next_sess_id (ob_active_session_guard.cpp:336) [20203][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1391569403904) [2024-09-13 13:02:18.120174] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DAD-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.120276] INFO register_pm (ob_page_manager.cpp:40) [20203][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07cfa56340, pm.get_tid()=20203, tenant_id=500) [2024-09-13 13:02:18.120295] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20203][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=97) [2024-09-13 13:02:18.120342] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObTimer create success(this=0x2b07c7281520, thread_id=20203, lbt()=0x24edc06b 0x13836960 0x115a4182 0xc0b4088 0x11a72f12 0xb216e42 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.120355] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish init mtl87(cost_time_us=1089, type="PN9oceanbase3sql12ObFLTSpanMgrE") [2024-09-13 13:02:18.120360] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl88(cost_time_us=0, type="PN9oceanbase5share12ObTestModuleE") [2024-09-13 13:02:18.120367] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=381, tg=0x2b07c72816f0, thread_cnt=2, tg->attr_={name:HeartbeatService, type:1}, tg=0x2b07c72816f0) [2024-09-13 13:02:18.120377] INFO start (ob_tenant_thread_helper.cpp:76) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] start tg(tg_id_=381, tg_name=HeartbeatService) [2024-09-13 13:02:18.120660] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) 
[19931][pnio1][T0][YB42AC103326-00062119D7143DAE-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.120664] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20204][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1395864371200) [2024-09-13 13:02:18.120675] INFO run1 (ob_timer.cpp:361) [20203][][T1][Y0-0000000000000000-0-0] [lt=5] timer thread started(this=0x2b07c7281520, tid=20203, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.120788] INFO register_pm (ob_page_manager.cpp:40) [20204][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07cfad4340, pm.get_tid()=20204, tenant_id=500) [2024-09-13 13:02:18.120812] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20204][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=98) [2024-09-13 13:02:18.120819] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20204][][T1][Y0-0000000000000000-0-0] [lt=6] new reentrant thread created(idx=0) [2024-09-13 13:02:18.121057] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20205][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1400159338496) [2024-09-13 13:02:18.121061] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DAE-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.121163] INFO register_pm (ob_page_manager.cpp:40) [20205][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07cfb52340, pm.get_tid()=20205, tenant_id=500) [2024-09-13 13:02:18.121190] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20205][][T1][Y0-0000000000000000-0-0] [lt=19] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=99) [2024-09-13 13:02:18.121195] INFO [SHARE] run1 (ob_reentrant_thread.cpp:153) [20205][][T1][Y0-0000000000000000-0-0] [lt=5] new 
reentrant thread created(idx=1) [2024-09-13 13:02:18.121318] INFO [COMMON] get_tenant_data_version (ob_cluster_version.cpp:291) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] tenant data version fallback to last barrier version(tenant_id=1, data_version=17180000512) [2024-09-13 13:02:18.121345] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl89(cost_time_us=981, type="PN9oceanbase10rootserver18ObHeartbeatServiceE") [2024-09-13 13:02:18.121983] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl90(cost_time_us=626, type="PN9oceanbase6common23ObOptStatMonitorManagerE") [2024-09-13 13:02:18.121999] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] finish init mtl91(cost_time_us=3, type="PN9oceanbase3omt11ObTenantSrsE") [2024-09-13 13:02:18.122013] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] ObSliceAlloc init finished(bsize_=7936, isize_=24, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:18.122069] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish init mtl92(cost_time_us=66, type="PN9oceanbase5table15ObHTableLockMgrE") [2024-09-13 13:02:18.122089] INFO [SERVER] init (ob_ttl_service.cpp:31) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] ttl service: init(ret=0, ret="OB_SUCCESS", tenant_id=1) [2024-09-13 13:02:18.122097] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish init mtl93(cost_time_us=19, type="PN9oceanbase5table12ObTTLServiceE") [2024-09-13 13:02:18.122107] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl94(cost_time_us=6, type="PN9oceanbase5table21ObTableApiSessPoolMgrE") 
[2024-09-13 13:02:18.124391] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DAF-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.124853] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DAF-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.125985] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119ED62FC73-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.128743] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=25] PNIO [ratelimit] time: 1726203738128742, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007611, add_bytes: 0 [2024-09-13 13:02:18.135107] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish init mtl95(cost_time_us=12992, type="PN9oceanbase7storage10checkpoint23ObCheckpointDiagnoseMgrE") [2024-09-13 13:02:18.135143] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=22] finish init mtl96(cost_time_us=9, type="PN9oceanbase7storage18ObStorageHADiagMgrE") [2024-09-13 13:02:18.135160] INFO [SERVER] init (ob_index_usage_info_mgr.cpp:128) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] index monitoring only for user tenant(tenant_id=1) [2024-09-13 13:02:18.135168] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish init mtl97(cost_time_us=14, type="PN9oceanbase5share19ObIndexUsageInfoMgrE") [2024-09-13 13:02:18.135184] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish init mtl98(cost_time_us=4, type="PN9oceanbase5share25ObResourceLimitCalculatorE") [2024-09-13 13:02:18.135202] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) 
[19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish init mtl99(cost_time_us=13, type="PN9oceanbase5table21ObTableGroupCommitMgrE") [2024-09-13 13:02:18.135211] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish init mtl100(cost_time_us=0, type="PN9oceanbase3sql13ObAuditLoggerE") [2024-09-13 13:02:18.135215] INFO [SHARE] init_mtl_module (ob_tenant_base.cpp:165) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish init mtl101(cost_time_us=0, type="PN9oceanbase3sql17ObAuditLogUpdaterE") [2024-09-13 13:02:18.135224] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:172) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] start_mtl_module(id_=1) [2024-09-13 13:02:18.135255] INFO mtl_start (ob_multi_tenant.cpp:2582) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=29] start tg(tg_id=298, tg_name=TntSharedTimer) [2024-09-13 13:02:18.135531] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20206][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1404454305792) [2024-09-13 13:02:18.135679] INFO register_pm (ob_page_manager.cpp:40) [20206][][T0][Y0-0000000000000000-0-0] [lt=42] register pm finish(ret=0, &pm=0x2b07cfbd0340, pm.get_tid()=20206, tenant_id=500) [2024-09-13 13:02:18.135722] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20206][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=100) [2024-09-13 13:02:18.135733] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObTimer create success(this=0x2b07b7591dc0, thread_id=20206, lbt()=0x24edc06b 0x13836960 0x115a4182 0xb20be8d 0x11a83cc0 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.135741] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish start mtl1(cost_time_us=488, type="PN9oceanbase3omt13ObSharedTimerE") 
[2024-09-13 13:02:18.135751] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl2(cost_time_us=0, type="PN9oceanbase3sql21ObTenantSQLSessionMgrE") [2024-09-13 13:02:18.135761] INFO start (ob_tenant_meta_mem_mgr.cpp:274) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] start tg(tg_id_=299, tg_name=TenantMetaMemMgr) [2024-09-13 13:02:18.136033] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20207][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1408749273088) [2024-09-13 13:02:18.136042] INFO run1 (ob_timer.cpp:361) [20206][][T1][Y0-0000000000000000-0-0] [lt=9] timer thread started(this=0x2b07b7591dc0, tid=20206, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.136160] INFO register_pm (ob_page_manager.cpp:40) [20207][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07d0856340, pm.get_tid()=20207, tenant_id=500) [2024-09-13 13:02:18.136191] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20207][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=101) [2024-09-13 13:02:18.136209] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObTimer create success(this=0x2b07bf1dd2d0, thread_id=20207, lbt()=0x24edc06b 0x13836960 0x115a4182 0xf5f7521 0x11a83dce 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.136224] INFO [STORAGE] start (ob_tenant_meta_mem_mgr.cpp:285) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] successfully to start t3m's three tasks(ret=0, tg_id_=299) [2024-09-13 13:02:18.136231] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish start mtl3(cost_time_us=476, type="PN9oceanbase7storage18ObTenantMetaMemMgrE") [2024-09-13 13:02:18.136243] INFO [SHARE] start_mtl_module 
(ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] finish start mtl4(cost_time_us=0, type="PN9oceanbase6common18ObServerObjectPoolINS_11transaction14ObPartTransCtxEEE") [2024-09-13 13:02:18.136247] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl5(cost_time_us=0, type="PN9oceanbase6common18ObServerObjectPoolINS_7storage19ObTableScanIteratorEEE") [2024-09-13 13:02:18.136493] INFO run1 (ob_timer.cpp:361) [20207][][T1][Y0-0000000000000000-0-0] [lt=12] timer thread started(this=0x2b07bf1dd2d0, tid=20207, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.136968] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=382, tg=0x2b07c09f5ed0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07c09f5ed0) [2024-09-13 13:02:18.136979] INFO init (ob_io_struct.cpp:2551) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id_=382, tg_name=IO_CALLBACK) [2024-09-13 13:02:18.137199] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20208][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1413044240384) [2024-09-13 13:02:18.137302] INFO register_pm (ob_page_manager.cpp:40) [20208][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07d08d4340, pm.get_tid()=20208, tenant_id=500) [2024-09-13 13:02:18.137351] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20208][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=102) [2024-09-13 13:02:18.137367] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [20208][T1_DiskCB][T1][Y0-0000000000000000-0-0] [lt=11] io callback thread started [2024-09-13 13:02:18.138019] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=383, 
tg=0x2b07b5743ec0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07b5743ec0) [2024-09-13 13:02:18.138035] INFO init (ob_io_struct.cpp:2551) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] start tg(tg_id_=383, tg_name=IO_CALLBACK) [2024-09-13 13:02:18.138234] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20209][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1417339207680) [2024-09-13 13:02:18.138348] INFO register_pm (ob_page_manager.cpp:40) [20209][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d0952340, pm.get_tid()=20209, tenant_id=500) [2024-09-13 13:02:18.138367] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20209][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=103) [2024-09-13 13:02:18.138376] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [20209][T1_DiskCB][T1][Y0-0000000000000000-0-0] [lt=5] io callback thread started [2024-09-13 13:02:18.139148] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=384, tg=0x2b07b5745ec0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07b5745ec0) [2024-09-13 13:02:18.139163] INFO init (ob_io_struct.cpp:2551) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=15] start tg(tg_id_=384, tg_name=IO_CALLBACK) [2024-09-13 13:02:18.139416] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20210][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1421634174976) [2024-09-13 13:02:18.139538] INFO register_pm (ob_page_manager.cpp:40) [20210][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07d09d0340, pm.get_tid()=20210, tenant_id=500) [2024-09-13 13:02:18.139567] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20210][][T1][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=104) [2024-09-13 13:02:18.139578] 
INFO [COMMON] run1 (ob_io_struct.cpp:2602) [20210][T1_DiskCB][T1][Y0-0000000000000000-0-0] [lt=8] io callback thread started [2024-09-13 13:02:18.140395] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=385, tg=0x2b07c095dec0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07c095dec0) [2024-09-13 13:02:18.140413] INFO init (ob_io_struct.cpp:2551) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=17] start tg(tg_id_=385, tg_name=IO_CALLBACK) [2024-09-13 13:02:18.140644] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20211][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1425929142272) [2024-09-13 13:02:18.140769] INFO register_pm (ob_page_manager.cpp:40) [20211][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d0e56340, pm.get_tid()=20211, tenant_id=500) [2024-09-13 13:02:18.140796] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20211][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=105) [2024-09-13 13:02:18.140805] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [20211][T1_DiskCB][T1][Y0-0000000000000000-0-0] [lt=6] io callback thread started [2024-09-13 13:02:18.141595] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] create tg succeed(tg_id=386, tg=0x2b07c0987ec0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07c0987ec0) [2024-09-13 13:02:18.141608] INFO init (ob_io_struct.cpp:2551) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] start tg(tg_id_=386, tg_name=IO_CALLBACK) [2024-09-13 13:02:18.141837] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20212][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1430224109568) [2024-09-13 13:02:18.142005] INFO register_pm (ob_page_manager.cpp:40) [20212][][T0][Y0-0000000000000000-0-0] [lt=26] register pm 
finish(ret=0, &pm=0x2b07d0ed4340, pm.get_tid()=20212, tenant_id=500) [2024-09-13 13:02:18.142034] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20212][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=106) [2024-09-13 13:02:18.142044] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [20212][T1_DiskCB][T1][Y0-0000000000000000-0-0] [lt=8] io callback thread started [2024-09-13 13:02:18.142702] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=387, tg=0x2b07c0989ec0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07c0989ec0) [2024-09-13 13:02:18.142714] INFO init (ob_io_struct.cpp:2551) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] start tg(tg_id_=387, tg_name=IO_CALLBACK) [2024-09-13 13:02:18.142925] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20213][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1434519076864) [2024-09-13 13:02:18.143023] INFO register_pm (ob_page_manager.cpp:40) [20213][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07d0f52340, pm.get_tid()=20213, tenant_id=500) [2024-09-13 13:02:18.143048] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20213][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=107) [2024-09-13 13:02:18.143058] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [20213][T1_DiskCB][T1][Y0-0000000000000000-0-0] [lt=8] io callback thread started [2024-09-13 13:02:18.143719] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=388, tg=0x2b07c725deb0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07c725deb0) [2024-09-13 13:02:18.143732] INFO init (ob_io_struct.cpp:2551) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] start tg(tg_id_=388, tg_name=IO_CALLBACK) [2024-09-13 13:02:18.143925] INFO [SHARE] 
get_next_sess_id (ob_active_session_guard.cpp:336) [20214][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1438814044160) [2024-09-13 13:02:18.144019] INFO register_pm (ob_page_manager.cpp:40) [20214][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d0fd0340, pm.get_tid()=20214, tenant_id=500) [2024-09-13 13:02:18.144037] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20214][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=108) [2024-09-13 13:02:18.144043] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [20214][T1_DiskCB][T1][Y0-0000000000000000-0-0] [lt=4] io callback thread started [2024-09-13 13:02:18.144630] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] create tg succeed(tg_id=389, tg=0x2b07c72818b0, thread_cnt=1, tg->attr_={name:IO_CALLBACK, type:2}, tg=0x2b07c72818b0) [2024-09-13 13:02:18.144643] INFO init (ob_io_struct.cpp:2551) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] start tg(tg_id_=389, tg_name=IO_CALLBACK) [2024-09-13 13:02:18.144803] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20215][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1443109011456) [2024-09-13 13:02:18.144895] INFO register_pm (ob_page_manager.cpp:40) [20215][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d1456340, pm.get_tid()=20215, tenant_id=500) [2024-09-13 13:02:18.144914] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20215][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=109) [2024-09-13 13:02:18.144920] INFO [COMMON] run1 (ob_io_struct.cpp:2602) [20215][T1_DiskCB][T1][Y0-0000000000000000-0-0] [lt=4] io callback thread started [2024-09-13 13:02:18.144924] WDIAG [COMMON] adjust_tenant_clock (ob_io_manager.cpp:344) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3][errcode=-4201] get tenant 
io manager failed(ret=-4201, cur_tenant_id=508)
[2024-09-13 13:02:18.144939] WDIAG [COMMON] start (ob_io_manager.cpp:703) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7][errcode=0] adjust tenant clock failed(tmp_ret=-4201)
[2024-09-13 13:02:18.144948] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl6(cost_time_us=8694, type="PN9oceanbase6common17ObTenantIOManagerE")
[2024-09-13 13:02:18.144980] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish start mtl7(cost_time_us=27, type="PN9oceanbase7storage3mds18ObTenantMdsServiceE")
[2024-09-13 13:02:18.144991] INFO start (ob_storage_log_writer.cpp:672) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=300, tg_name=StorageLogWriter)
[2024-09-13 13:02:18.145188] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20216][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1447403978752)
[2024-09-13 13:02:18.145280] INFO register_pm (ob_page_manager.cpp:40) [20216][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07d14d4340, pm.get_tid()=20216, tenant_id=500)
[2024-09-13 13:02:18.145295] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20216][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=110)
[2024-09-13 13:02:18.145295] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl8(cost_time_us=306, type="PN9oceanbase7storage15ObStorageLoggerE")
[2024-09-13 13:02:18.145301] INFO [STORAGE.REDO] run1 (ob_storage_log_writer.cpp:698) [20216][][T1][Y0-0000000000000000-0-0] [lt=4] ObSLogWriteRunner run(tg_id_=300, is_inited_=true)
[2024-09-13 13:02:18.145302] INFO start (ob_shared_macro_block_manager.cpp:156) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(tg_id_=301, tg_name=SSTableDefragment)
[2024-09-13 13:02:18.145499] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20217][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1451698946048)
[2024-09-13 13:02:18.145605] INFO register_pm (ob_page_manager.cpp:40) [20217][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07d1552340, pm.get_tid()=20217, tenant_id=500)
[2024-09-13 13:02:18.145630] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20217][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=111)
[2024-09-13 13:02:18.145664] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObTimer create success(this=0x2b07bf1df2d0, thread_id=20217, lbt()=0x24edc06b 0x13836960 0x115a4182 0x1019aed1 0x11a840f8 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:18.145680] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish start mtl9(cost_time_us=374, type="PN9oceanbase12blocksstable21ObSharedMacroBlockMgrE")
[2024-09-13 13:02:18.145685] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish start mtl10(cost_time_us=0, type="PN9oceanbase5share19ObSharedMemAllocMgrE")
[2024-09-13 13:02:18.145903] INFO run1 (ob_timer.cpp:361) [20217][][T1][Y0-0000000000000000-0-0] [lt=6] timer thread started(this=0x2b07bf1df2d0, tid=20217, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:18.145900] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20218][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1455993913344)
[2024-09-13 13:02:18.146030] INFO register_pm (ob_page_manager.cpp:40) [20218][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07d15d0340, pm.get_tid()=20218, tenant_id=500)
[2024-09-13 13:02:18.146062] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20218][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=112)
[2024-09-13 13:02:18.146396] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20219][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1460288880640)
[2024-09-13 13:02:18.146422] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DB0-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.146512] INFO register_pm (ob_page_manager.cpp:40) [20219][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07d1856340, pm.get_tid()=20219, tenant_id=500)
[2024-09-13 13:02:18.146533] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20219][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=113)
[2024-09-13 13:02:18.146534] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimeWheel start success(timer_name="TransTimeWheel")
[2024-09-13 13:02:18.146553] INFO [STORAGE.TRANS] start (ob_trans_timer.cpp:209) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=17] ObTransTimer start success
[2024-09-13 13:02:18.146765] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20220][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1464583847936)
[2024-09-13 13:02:18.146839] INFO register_pm (ob_page_manager.cpp:40) [20220][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d18d4340, pm.get_tid()=20220, tenant_id=500)
[2024-09-13 13:02:18.146860] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20220][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=114)
[2024-09-13 13:02:18.146862] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObTimeWheel start success(timer_name="DupTbLease")
[2024-09-13 13:02:18.146868] INFO [STORAGE.TRANS] start (ob_trans_timer.cpp:209) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObTransTimer start success
[2024-09-13 13:02:18.146891] INFO [STORAGE.TRANS] start (ob_trans_rpc.cpp:237) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTransRpc start success
[2024-09-13 13:02:18.146906] INFO [STORAGE.TRANS] start (ob_gti_rpc.cpp:77) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] gti request rpc start success
[2024-09-13 13:02:18.146910] INFO [STORAGE.TRANS] start (ob_gti_source.cpp:79) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] ObGtiSource start success
[2024-09-13 13:02:18.146904] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DB0-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.146914] INFO [STORAGE.TRANS] start (ob_trans_ctx_mgr_v4.cpp:1737) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] ObTxCtxMgr start success(*this={is_inited_:true, tenant_id_:1, this:0x2b07c3a041b0})
[2024-09-13 13:02:18.146926] INFO [STORAGE.TRANS] start (ob_trans_define_v4.cpp:1528) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] txDescMgr.start(inited_=true, stoped_=false, active_cnt=0)
[2024-09-13 13:02:18.146933] INFO [STORAGE.TRANS] start (ob_trans_service.cpp:238) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] transaction service start success(this={is_inited_:true, tenant_id_:1, this:0x2b07c3a04030})
[2024-09-13 13:02:18.146942] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl11(cost_time_us=1253, type="PN9oceanbase11transaction14ObTransServiceE")
[2024-09-13 13:02:18.146957] INFO [COORDINATOR] mtl_start (ob_leader_coordinator.cpp:108) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObLeaderCoordinator mtl start success(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:18.146965] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl12(cost_time_us=18, type="PN9oceanbase10logservice11coordinator19ObLeaderCoordinatorE")
[2024-09-13 13:02:18.146981] INFO [COORDINATOR] mtl_start (ob_failure_detector.cpp:97) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObFailureDetector mtl start
[2024-09-13 13:02:18.146989] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl13(cost_time_us=20, type="PN9oceanbase10logservice11coordinator17ObFailureDetectorE")
[2024-09-13 13:02:18.147342] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DB1-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.147527] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20221][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1468878815232)
[2024-09-13 13:02:18.147614] INFO register_pm (ob_page_manager.cpp:40) [20221][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d1952340, pm.get_tid()=20221, tenant_id=500)
[2024-09-13 13:02:18.147633] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20221][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=115)
[2024-09-13 13:02:18.147634] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] simple thread pool init success(name=ApplySrv, thread_num=1, task_num_limit=33792)
[2024-09-13 13:02:18.147646] INFO start (ob_log_apply_service.cpp:1101) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] start tg(tg_id_=307, tg_name=ApplySrv)
[2024-09-13 13:02:18.147654] INFO [CLOG] start (ob_log_apply_service.cpp:1105) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start ObLogApplyService success(ret=0, tg_id_=307)
[2024-09-13 13:02:18.147821] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DB1-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.148138] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20222][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1473173782528)
[2024-09-13 13:02:18.148237] INFO register_pm (ob_page_manager.cpp:40) [20222][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07d19d0340, pm.get_tid()=20222, tenant_id=500)
[2024-09-13 13:02:18.148261] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] simple thread pool init success(name=ReplaySrv, thread_num=1, task_num_limit=33792)
[2024-09-13 13:02:18.148263] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20222][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=116)
[2024-09-13 13:02:18.148270] INFO start (ob_log_replay_service.cpp:235) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=308, tg_name=ReplaySrv)
[2024-09-13 13:02:18.148274] INFO [COMMON] set_adaptive_strategy (ob_simple_thread_pool.cpp:197) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] set thread pool adaptive strategy success(name_=ReplaySrv, strategy={least_thread_num:8, estimate_ts:200000, expand_rate:90, shrink_rate:75})
[2024-09-13 13:02:18.148289] INFO start (ob_log_replay_service.cpp:79) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] start tg(tg_id_=309, tg_name=ReplayProcessStat)
[2024-09-13 13:02:18.148465] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20223][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1477468749824)
[2024-09-13 13:02:18.148531] INFO register_pm (ob_page_manager.cpp:40) [20223][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d1a56340, pm.get_tid()=20223, tenant_id=500)
[2024-09-13 13:02:18.148550] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20223][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=117)
[2024-09-13 13:02:18.148605] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObTimer create success(this=0x2b07bf1dfe00, thread_id=20223, lbt()=0x24edc06b 0x13836960 0x115a4182 0x880678e 0x83e0078 0x11a8439b 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:18.148617] INFO [CLOG] start (ob_log_replay_service.cpp:84) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] ReplayProcessStat start success(tg_id_=309, rp_sv_=0x2b07c24c8470)
[2024-09-13 13:02:18.148626] INFO [CLOG] start (ob_log_replay_service.cpp:246) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start ObLogReplayService success(ret=0, tg_id_=308)
[2024-09-13 13:02:18.148850] INFO run1 (ob_timer.cpp:361) [20223][][T1][Y0-0000000000000000-0-0] [lt=4] timer thread started(this=0x2b07bf1dfe00, tid=20223, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:18.151235] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DB2-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.151623] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DB2-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.154554] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DB3-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.155034] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DB3-0-0] [lt=63][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.155269] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DB4-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.155638] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DB4-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.155997] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20224][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1481763717120)
[2024-09-13 13:02:18.156126] INFO register_pm (ob_page_manager.cpp:40) [20224][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07d1ad4340, pm.get_tid()=20224, tenant_id=500)
[2024-09-13 13:02:18.156152] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20224][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=118)
[2024-09-13 13:02:18.156151] INFO [COMMON] init (ob_simple_thread_pool.cpp:58) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] simple thread pool init success(name=RCSrv, thread_num=1, task_num_limit=1048576)
[2024-09-13 13:02:18.156167] INFO start (ob_role_change_service.cpp:166) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] start tg(tg_id_=310, tg_name=RCSrv)
[2024-09-13 13:02:18.156177] INFO [CLOG] start (ob_role_change_service.cpp:169) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObRoleChangeService start success(ret=0, tg_id_=310)
[2024-09-13 13:02:18.170630] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [19930][pnio1][T0][YB42AC103326-00062119D7143DB5-0-0] [lt=14][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203738170322, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62034893, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203738169580}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:18.170682] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DB5-0-0] [lt=52][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.171191] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DB5-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.173201] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DB6-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.173639] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DB6-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.182789] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DB7-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.183446] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DB7-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.195597] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DB8-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.201589] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DB8-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.201971] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DB9-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.202746] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DB9-0-0] [lt=33][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.202973] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DBA-0-0] [lt=24][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.203394] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DBA-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.207500] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DBB-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.208019] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DBB-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.213390] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] sock regist: 0x2b07b3e1a270 fd=108
[2024-09-13 13:02:18.213412] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=19] [ussl] accept new connection, fd:108, src_addr:172.16.51.36:53862
[2024-09-13 13:02:18.213431] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] auth mothod is NONE, the fd will be dispatched, fd:108, src_addr:172.16.51.36:53862
[2024-09-13 13:02:18.213459] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=27] PNIO dispatch fd to certain group, fd:108, gid:0x100000000
[2024-09-13 13:02:18.213533] INFO pkts_sk_init (pkts_sk_factory.h:23) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=25] PNIO set pkts_sk_t sock_id s=0x2b07b0a62a58, s->id=65534
[2024-09-13 13:02:18.213548] INFO pkts_sk_new (pkts_sk_factory.h:51) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=14] PNIO sk_new: s=0x2b07b0a62a58
[2024-09-13 13:02:18.213559] INFO eloop_regist (eloop.c:47) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO sock regist: 0x2b07b0a62a58 fd=108
[2024-09-13 13:02:18.213570] INFO on_accept (listenfd.c:39) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO accept new connection, ns=0x2b07b0a62a58, fd=fd:108:local:"172.16.51.36:53862":remote:"172.16.51.36:53862"
[2024-09-13 13:02:18.213598] WDIAG listenfd_handle_event (listenfd.c:71) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=7][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1
[2024-09-13 13:02:18.214538] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] sock regist: 0x2b07b3e1a270 fd=109
[2024-09-13 13:02:18.214553] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=14] [ussl] accept new connection, fd:109, src_addr:172.16.51.36:53864
[2024-09-13 13:02:18.214566] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] auth mothod is NONE, the fd will be dispatched, fd:109, src_addr:172.16.51.36:53864
[2024-09-13 13:02:18.214570] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] PNIO dispatch fd to certain group, fd:109, gid:0x100000001
[2024-09-13 13:02:18.214585] INFO pkts_sk_init (pkts_sk_factory.h:23) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=22] PNIO set pkts_sk_t sock_id s=0x2b07b0a63468, s->id=65534
[2024-09-13 13:02:18.214594] INFO pkts_sk_new (pkts_sk_factory.h:51) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO sk_new: s=0x2b07b0a63468
[2024-09-13 13:02:18.214601] INFO eloop_regist (eloop.c:47) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=3] PNIO sock regist: 0x2b07b0a63468 fd=109
[2024-09-13 13:02:18.214608] INFO on_accept (listenfd.c:39) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO accept new connection, ns=0x2b07b0a63468, fd=fd:109:local:"172.16.51.36:53864":remote:"172.16.51.36:53864"
[2024-09-13 13:02:18.214646] WDIAG listenfd_handle_event (listenfd.c:71) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=13][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1
[2024-09-13 13:02:18.214656] INFO [SHARE] end_run (ob_tenant_base.cpp:332) [20124][T1_ObLogEXTTP0][T0][Y0-0000000000000000-0-0] [lt=6] tenant thread end_run(id_=1, ret=0, thread_count_=117)
[2024-09-13 13:02:18.214716] INFO unregister_pm (ob_page_manager.cpp:50) [20124][T1_ObLogEXTTP0][T0][Y0-0000000000000000-0-0] [lt=16] unregister pm finish(&pm=0x2b07c7ad4340, pm.get_tid()=20124)
[2024-09-13 13:02:18.214828] INFO [CLOG] resize_ (ob_log_external_storage_handler.cpp:405) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] resize_ success(time_guard=time guard 'resize impl' cost too much time, used=58632, this={concurrency:1, capacity:64, is_running:false, is_inited:true, handle_adapter_:0x2b07c59abe70, this:0x2b07c24cb670}, new_concurrency=0, real_concurrency=0)
[2024-09-13 13:02:18.214849] WDIAG [LIB] ~ObTimeGuard (utility.h:890) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=21][errcode=-4389] destruct(*this=time guard 'resize impl' cost too much time, used=58653, time_dist: set thread count=58647)
[2024-09-13 13:02:18.214859] INFO start_tenant_tg_ (ob_cdc_service.cpp:683) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] start tg(tg_id_=311, tg_name=CDCSrv)
[2024-09-13 13:02:18.214950] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20225][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1486058684416)
[2024-09-13 13:02:18.215028] INFO register_pm (ob_page_manager.cpp:40) [20225][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07c7ad4340, pm.get_tid()=20225, tenant_id=500)
[2024-09-13 13:02:18.215048] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20225][][T1][Y0-0000000000000000-0-0] [lt=10] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=118)
[2024-09-13 13:02:18.215065] INFO [CLOG.EXTLOG] run1 (ob_cdc_service.cpp:202) [20225][T1_CdcSrv][T1][Y0-0000000000000000-0-0] [lt=6] total number of items in ctx map (count=0)
[2024-09-13 13:02:18.215084] INFO [CLOG.EXTLOG] resize_log_ext_handler_ (ob_cdc_service.cpp:649) [20225][T1_CdcSrv][T1][Y0-0000000000000000-0-0] [lt=8] finish to resize log external storage handler(current_ts=1726203738215077, tenant_max_cpu=2, valid_ls_v1_count=0, valid_ls_v2_count=0, other_ls_count=0, new_concurrency=0)
[2024-09-13 13:02:18.215322] INFO [SHARE] end_run (ob_tenant_base.cpp:332) [20125][T1_ObLogEXTTP0][T0][Y0-0000000000000000-0-0] [lt=6] tenant thread end_run(id_=1, ret=0, thread_count_=117)
[2024-09-13 13:02:18.215343] INFO unregister_pm (ob_page_manager.cpp:50) [20125][T1_ObLogEXTTP0][T0][Y0-0000000000000000-0-0] [lt=16] unregister pm finish(&pm=0x2b07c7b52340, pm.get_tid()=20125)
[2024-09-13 13:02:18.215398] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DBC-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.215456] INFO [CLOG] resize_ (ob_log_external_storage_handler.cpp:405) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] resize_ success(time_guard=time guard 'resize impl' cost too much time, used=394, this={concurrency:1, capacity:64, is_running:false, is_inited:true, handle_adapter_:0x2b07c67fda70, this:0x2b07c25e5cb0}, new_concurrency=0, real_concurrency=0)
[2024-09-13 13:02:18.215527] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20226][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1490353651712)
[2024-09-13 13:02:18.215592] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20226][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=118)
[2024-09-13 13:02:18.215595] INFO [CLOG] start (ob_remote_fetch_log_worker.cpp:145) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] ObRemoteFetchWorker start succ(tenant_id=1)
[2024-09-13 13:02:18.215606] INFO [CLOG] run1 (ob_remote_fetch_log_worker.cpp:218) [20226][][T1][Y0-0000000000000000-0-0] [lt=9] ObRemoteFetchWorker thread start
[2024-09-13 13:02:18.215613] INFO [CLOG] do_thread_task_ (ob_remote_fetch_log_worker.cpp:250) [20226][T1_RFLWorker][T1][YB42AC103323-000621F920860C7D-0-0] [lt=4] ObRemoteFetchWorker is running(thread_index=0)
[2024-09-13 13:02:18.215828] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20227][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1494648619008)
[2024-09-13 13:02:18.215921] INFO register_pm (ob_page_manager.cpp:40) [20227][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07d1b52340, pm.get_tid()=20227, tenant_id=500)
[2024-09-13 13:02:18.215949] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20227][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=119)
[2024-09-13 13:02:18.215957] INFO [CLOG] run1 (ob_remote_log_writer.cpp:124) [20227][][T1][Y0-0000000000000000-0-0] [lt=7] ObRemoteLogWriter thread start
[2024-09-13 13:02:18.215956] INFO [CLOG] start (ob_remote_log_writer.cpp:105) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] ObRemoteLogWriter start succ
[2024-09-13 13:02:18.216159] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20228][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1498943586304)
[2024-09-13 13:02:18.216239] INFO register_pm (ob_page_manager.cpp:40) [20228][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d1bd0340, pm.get_tid()=20228, tenant_id=500)
[2024-09-13 13:02:18.216259] INFO [CLOG] start (ob_log_restore_service.cpp:128) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] restore service start succ(tenant_id=1)
[2024-09-13 13:02:18.216259] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20228][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=120)
[2024-09-13 13:02:18.216266] INFO [CLOG] start (ob_log_service.cpp:156) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObLogService is started
[2024-09-13 13:02:18.216267] INFO [CLOG] run1 (ob_log_restore_service.cpp:158) [20228][][T1][Y0-0000000000000000-0-0] [lt=6] ObLogRestoreService thread run(tenant_id=1)
[2024-09-13 13:02:18.216271] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl14(cost_time_us=69278, type="PN9oceanbase10logservice12ObLogServiceE")
[2024-09-13 13:02:18.216387] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DBC-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.216496] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20229][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1503238553600)
[2024-09-13 13:02:18.216591] INFO register_pm (ob_page_manager.cpp:40) [20229][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d2a56340, pm.get_tid()=20229, tenant_id=500)
[2024-09-13 13:02:18.216609] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20229][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=121)
[2024-09-13 13:02:18.216610] INFO [SERVER] start (ob_safe_destroy_handler.cpp:184) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObSafeDestroyHandler start
[2024-09-13 13:02:18.216620] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl15(cost_time_us=338, type="PN9oceanbase10logservice18ObGarbageCollectorE")
[2024-09-13 13:02:18.216620] INFO [CLOG] run1 (ob_garbage_collector.cpp:1351) [20229][][T1][Y0-0000000000000000-0-0] [lt=5] Garbage Collector start to run
[2024-09-13 13:02:18.216633] INFO [STORAGE] start (ob_ls_service.cpp:420) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] ls service start successfully
[2024-09-13 13:02:18.216630] INFO [CLOG] run1 (ob_garbage_collector.cpp:1358) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=6] Garbage Collector is running(seq_=1, gc_interval=10000000)
[2024-09-13 13:02:18.216640] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish start mtl16(cost_time_us=12, type="PN9oceanbase7storage11ObLSServiceE")
[2024-09-13 13:02:18.216690] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=8] ObSliceAlloc init finished(bsize_=7936, isize_=200, slice_limit_=7536, tmallocator_=NULL)
[2024-09-13 13:02:18.216774] INFO [CLOG] gc_check_member_list_ (ob_garbage_collector.cpp:1451) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=11] gc_check_member_list_ cost time(ret=0, time_us=135)
[2024-09-13 13:02:18.216788] INFO [CLOG] execute_gc_ (ob_garbage_collector.cpp:1723) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=7] execute_gc cost time(ret=0, time_us=1)
[2024-09-13 13:02:18.216799] INFO [CLOG] execute_gc_ (ob_garbage_collector.cpp:1723) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=7] execute_gc cost time(ret=0, time_us=0)
[2024-09-13 13:02:18.216971] INFO [STORAGE] replay_new_checkpoint (ob_tenant_checkpoint_slog_handler.cpp:424) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] no ls checkpoint(ret=0)
[2024-09-13 13:02:18.216986] INFO [STORAGE] replay_checkpoint (ob_tenant_checkpoint_slog_handler.cpp:375) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish replay tenant checkpoint(ret=0, super_block={tenant_id:1, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true, version:2})
[2024-09-13 13:02:18.217061] WDIAG [STORAGE.REDO] replay (ob_storage_log_replayer.cpp:147) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12][errcode=0] There is no redo log(replay_start_cursor=ObLogCursor{file_id=1, log_id=1, offset=0})
[2024-09-13 13:02:18.217125] INFO [STORAGE] concurrent_replay_load_tablets (ob_tenant_checkpoint_slog_handler.cpp:783) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish concurrently repaly load tablets(ret=0, total_tablet_cnt=0, cost_time_us=15)
[2024-09-13 13:02:18.217170] INFO [STORAGE.REDO] start_log (ob_storage_log_writer.cpp:175) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] slog writer start log(ret=0, start_cursor=ObLogCursor{file_id=1, log_id=1, offset=0})
[2024-09-13 13:02:18.217180] INFO [STORAGE] replay_tenant_slog (ob_tenant_checkpoint_slog_handler.cpp:575) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish replay tenant slog(ret=0, start_point=ObLogCursor{file_id=1, log_id=1, offset=0}, replay_finish_point=ObLogCursor{file_id=1, log_id=1, offset=0})
[2024-09-13 13:02:18.217212] INFO start (ob_tenant_checkpoint_slog_handler.cpp:292) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(tg_id_=312, tg_name=WriteCkpt)
[2024-09-13 13:02:18.217462] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20230][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1507533520896)
[2024-09-13 13:02:18.217531] INFO register_pm (ob_page_manager.cpp:40) [20230][][T0][Y0-0000000000000000-0-0] [lt=27] register pm finish(ret=0, &pm=0x2b07d2ad4340, pm.get_tid()=20230, tenant_id=500)
[2024-09-13 13:02:18.217550] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20230][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=122)
[2024-09-13 13:02:18.217585] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] ObTimer create success(this=0x2b07c09af110, thread_id=20230, lbt()=0x24edc06b 0x13836960 0x115a4182 0xf858af6 0x11a84536 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75)
[2024-09-13 13:02:18.217598] INFO [STORAGE] start (ob_tenant_checkpoint_slog_handler.cpp:299) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish start ObTenantCheckpointSlogHandler(ret=0, tg_id=312, super_block={tenant_id:1, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true, version:2})
[2024-09-13 13:02:18.217611] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish start mtl17(cost_time_us=959, type="PN9oceanbase7storage29ObTenantCheckpointSlogHandlerE")
[2024-09-13 13:02:18.217616] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish start mtl18(cost_time_us=0, type="PN9oceanbase10compaction29ObTenantCompactionProgressMgrE")
[2024-09-13 13:02:18.217628] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish start mtl19(cost_time_us=0, type="PN9oceanbase10compaction30ObServerCompactionEventHistoryE")
[2024-09-13 13:02:18.217632] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl20(cost_time_us=0, type="PN9oceanbase7storage21ObTenantTabletStatMgrE")
[2024-09-13 13:02:18.217885] INFO run1 (ob_timer.cpp:361) [20230][][T1][Y0-0000000000000000-0-0] [lt=5] timer thread started(this=0x2b07c09af110, tid=20230, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:18.217934] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20231][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1511828488192)
[2024-09-13 13:02:18.218004] INFO register_pm (ob_page_manager.cpp:40) [20231][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07d2b52340, pm.get_tid()=20231, tenant_id=500)
[2024-09-13 13:02:18.218035] INFO [STORAGE.TRANS] start (ob_lock_wait_mgr.cpp:131) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] LockWaitMgr.start(ret=0)
[2024-09-13 13:02:18.218036] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20231][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=123)
[2024-09-13 13:02:18.218051] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish start mtl21(cost_time_us=411, type="PN9oceanbase8memtable13ObLockWaitMgrE")
[2024-09-13 13:02:18.218145] INFO [STORAGE.TRANS] dump_mapper_info (ob_lock_wait_mgr.h:66) [20231][T1_LockWaitMgr][T1][Y0-0000000000000000-0-0] [lt=9] report RowHolderMapper summary info(count=0, bkt_cnt=248)
[2024-09-13 13:02:18.218678] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20232][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1516123455488)
[2024-09-13 13:02:18.218734] INFO register_pm (ob_page_manager.cpp:40) [20232][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07d2bd0340, pm.get_tid()=20232, tenant_id=500)
[2024-09-13 13:02:18.218757] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20232][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=124)
[2024-09-13 13:02:18.218756] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] init thread success(this=0x2b07baffe030, id=14, ret=0)
[2024-09-13 13:02:18.218764] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20232][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4] thread is running function
[2024-09-13 13:02:18.219000] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20233][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1520418422784)
[2024-09-13 13:02:18.219045] INFO register_pm (ob_page_manager.cpp:40) [20233][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07d2c56340, pm.get_tid()=20233, tenant_id=500)
[2024-09-13 13:02:18.219061] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] init thread success(this=0x2b07baffe0d0, id=15, ret=0)
[2024-09-13 13:02:18.219099] INFO [OCCAM] init (ob_occam_thread_pool.h:248) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] init occam thread pool success(ret=0, thread_num=2, queue_size_square_of_2=10, lbt()="0x24edc06b 0x8359ee6 0x8358cc3 0xf7eada2 0x10d618f6 0xb1fe450 0x11a847e8 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:18.219109] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:525) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] thread_pool_ init success(thread_pool_={this:0x2b07a0de7a40, block_ptr_.control_ptr:0x2b07c727b790, block_ptr_.data_ptr:0x2b07c727b810}, thread_num_=0, queue_size_square_of_2_=0)
[2024-09-13 13:02:18.219416] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20233][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=125)
[2024-09-13 13:02:18.219431] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20233][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] thread is running function
[2024-09-13 13:02:18.219820] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] TimeWheelBase inited success(precision=100000, start_ticket=17262037382, scan_ticket=17262037382)
[2024-09-13 13:02:18.219830] INFO [STORAGE.TRANS]
init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObTimeWheel init success(precision=100000, real_thread_num=1) [2024-09-13 13:02:18.220073] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20234][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1524713390080) [2024-09-13 13:02:18.220143] INFO register_pm (ob_page_manager.cpp:40) [20234][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d2cd4340, pm.get_tid()=20234, tenant_id=500) [2024-09-13 13:02:18.220168] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20234][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=126) [2024-09-13 13:02:18.220167] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimeWheel start success(timer_name="OBJLockGC") [2024-09-13 13:02:18.220174] INFO [OCCAM] init_and_start (ob_occam_timer.h:546) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] init ObOccamTimer success(ret=0) [2024-09-13 13:02:18.220184] INFO [STORAGE.TABLELOCK] start (ob_table_lock_service.cpp:146) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] ObTableLockService::ObOBJLockGarbageCollector starts successfully(ret=0, this={this:0x2b07a0de7a40, last_success_timestamp:0}) [2024-09-13 13:02:18.220208] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=23] finish start mtl22(cost_time_us=2149, type="PN9oceanbase11transaction9tablelock18ObTableLockServiceE") [2024-09-13 13:02:18.220212] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl23(cost_time_us=0, type="PN9oceanbase10rootserver27ObPrimaryMajorFreezeServiceE") [2024-09-13 13:02:18.220216] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start 
mtl24(cost_time_us=0, type="PN9oceanbase10rootserver27ObRestoreMajorFreezeServiceE") [2024-09-13 13:02:18.220223] INFO start (ob_tenant_meta_checker.cpp:126) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(ls_checker_tg_id_=314, tg_name=LSMetaCh) [2024-09-13 13:02:18.220469] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20235][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1529008357376) [2024-09-13 13:02:18.220551] INFO register_pm (ob_page_manager.cpp:40) [20235][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d2d52340, pm.get_tid()=20235, tenant_id=500) [2024-09-13 13:02:18.220572] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20235][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=127) [2024-09-13 13:02:18.220602] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObTimer create success(this=0x2b07c09af4f0, thread_id=20235, lbt()=0x24edc06b 0x13836960 0x115a4182 0xab3acfc 0x11a84986 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.220612] INFO start (ob_tenant_meta_checker.cpp:128) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] start tg(tablet_checker_tg_id_=315, tg_name=TbMetaCh) [2024-09-13 13:02:18.220869] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20236][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1533303324672) [2024-09-13 13:02:18.220980] INFO run1 (ob_timer.cpp:361) [20235][][T1][Y0-0000000000000000-0-0] [lt=8] timer thread started(this=0x2b07c09af4f0, tid=20235, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.221021] INFO register_pm (ob_page_manager.cpp:40) [20236][][T0][Y0-0000000000000000-0-0] [lt=33] register pm finish(ret=0, &pm=0x2b07d2dd0340, pm.get_tid()=20236, tenant_id=500) [2024-09-13 13:02:18.221057] INFO 
[SHARE] pre_run (ob_tenant_base.cpp:314) [20236][][T1][Y0-0000000000000000-0-0] [lt=19] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=128) [2024-09-13 13:02:18.221064] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimer create success(this=0x2b07c09af6e0, thread_id=20236, lbt()=0x24edc06b 0x13836960 0x115a4182 0xab3ad9c 0x11a84986 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.221097] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DBD-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.221145] INFO [SERVER] start (ob_tenant_meta_checker.cpp:136) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] ObTenantMetaChecker start success(tenant_id=1, ls_checker_tg_id=314, tablet_checker_tg_id=315) [2024-09-13 13:02:18.221155] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish start mtl25(cost_time_us=932, type="PN9oceanbase8observer19ObTenantMetaCheckerE") [2024-09-13 13:02:18.221226] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] create tg succeed(tg_id=390, tg=0x2b07c728d600, thread_cnt=6, tg->attr_={name:MysqlQueueTh, type:2}, tg=0x2b07c728d600) [2024-09-13 13:02:18.221241] INFO start_mysql_queue (ob_multi_tenant.cpp:399) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] start tg(qthread->tg_id_=390, tg_name=MysqlQueueTh) [2024-09-13 13:02:18.221400] INFO run1 (ob_timer.cpp:361) [20236][][T1][Y0-0000000000000000-0-0] [lt=9] timer thread started(this=0x2b07c09af6e0, tid=20236, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.221517] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20237][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1537598291968) [2024-09-13 
13:02:18.221619] INFO register_pm (ob_page_manager.cpp:40) [20237][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07d2e56340, pm.get_tid()=20237, tenant_id=500) [2024-09-13 13:02:18.221640] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20237][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=129) [2024-09-13 13:02:18.221650] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [20237][T1_MysqlQueueTh][T1][Y0-0000000000000000-0-0] [lt=5] new task thread create(&translator_=0x55a3869d1210) [2024-09-13 13:02:18.221659] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20237][T1_MysqlQueueTh][T1][Y0-0000000000000000-0-0] [lt=7] Init thread local success [2024-09-13 13:02:18.221708] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DBD-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.221865] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20238][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1541893259264) [2024-09-13 13:02:18.221972] INFO register_pm (ob_page_manager.cpp:40) [20238][][T0][Y0-0000000000000000-0-0] [lt=23] register pm finish(ret=0, &pm=0x2b07d2ed4340, pm.get_tid()=20238, tenant_id=500) [2024-09-13 13:02:18.221994] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20238][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=130) [2024-09-13 13:02:18.222001] INFO [RPC.FRAME] onThreadCreated (ob_req_qhandler.cpp:45) [20238][T1_MysqlQueueTh][T1][Y0-0000000000000000-0-0] [lt=6] new task thread create(&translator_=0x55a3869d1210) [2024-09-13 13:02:18.222005] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20238][T1_MysqlQueueTh][T1][Y0-0000000000000000-0-0] [lt=4] Init thread local success [2024-09-13 13:02:18.221994] INFO [SERVER.OMT] start_mysql_queue (ob_multi_tenant.cpp:403) 
[19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] tenant mysql_queue mtl_start success(ret=0, tenant_id=1, qthread->tg_id_=390, sql_thread_count=2) [2024-09-13 13:02:18.222011] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=17] finish start mtl26(cost_time_us=842, type="PN9oceanbase8observer11QueueThreadE") [2024-09-13 13:02:18.222017] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl27(cost_time_us=0, type="PN9oceanbase7storage25ObStorageHAHandlerServiceE") [2024-09-13 13:02:18.222025] INFO [SHARE] set_tenant_role (ob_tenant_base.h:537) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] set tenant role(tenant_role_value=1, tenant_role_value_=0) [2024-09-13 13:02:18.222041] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl28(cost_time_us=17, type="PN9oceanbase10rootserver18ObTenantInfoLoaderE") [2024-09-13 13:02:18.222045] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl29(cost_time_us=0, type="PN9oceanbase10rootserver27ObCreateStandbyFromNetActorE") [2024-09-13 13:02:18.222063] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl30(cost_time_us=13, type="PN9oceanbase10rootserver29ObStandbySchemaRefreshTriggerE") [2024-09-13 13:02:18.222075] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl31(cost_time_us=9, type="PN9oceanbase10rootserver20ObLSRecoveryReportorE") [2024-09-13 13:02:18.222084] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish start mtl32(cost_time_us=0, type="PN9oceanbase10rootserver17ObCommonLSServiceE") [2024-09-13 13:02:18.222088] INFO [SHARE] 
start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl33(cost_time_us=0, type="PN9oceanbase10rootserver18ObPrimaryLSServiceE") [2024-09-13 13:02:18.222096] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl34(cost_time_us=0, type="PN9oceanbase10rootserver27ObBalanceTaskExecuteServiceE") [2024-09-13 13:02:18.222103] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl35(cost_time_us=0, type="PN9oceanbase10rootserver19ObRecoveryLSServiceE") [2024-09-13 13:02:18.222106] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl36(cost_time_us=0, type="PN9oceanbase10rootserver16ObRestoreServiceE") [2024-09-13 13:02:18.222115] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish start mtl37(cost_time_us=0, type="PN9oceanbase10rootserver22ObTenantBalanceServiceE") [2024-09-13 13:02:18.222120] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl38(cost_time_us=0, type="PN9oceanbase10rootserver21ObBackupTaskSchedulerE") [2024-09-13 13:02:18.222124] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl39(cost_time_us=0, type="PN9oceanbase10rootserver19ObBackupDataServiceE") [2024-09-13 13:02:18.222132] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl40(cost_time_us=0, type="PN9oceanbase10rootserver20ObBackupCleanServiceE") [2024-09-13 13:02:18.222138] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish start mtl41(cost_time_us=0, 
type="PN9oceanbase10rootserver25ObArchiveSchedulerServiceE") [2024-09-13 13:02:18.222142] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl42(cost_time_us=0, type="PN9oceanbase7storage27ObTenantSSTableMergeInfoMgrE") [2024-09-13 13:02:18.222146] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish start mtl43(cost_time_us=0, type="PN9oceanbase5share26ObDagWarningHistoryManagerE") [2024-09-13 13:02:18.222150] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl44(cost_time_us=1, type="PN9oceanbase10compaction24ObScheduleSuspectInfoMgrE") [2024-09-13 13:02:18.222161] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish start mtl45(cost_time_us=0, type="PN9oceanbase7storage12ObLobManagerE") [2024-09-13 13:02:18.222165] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl46(cost_time_us=0, type="PN9oceanbase5share22ObGlobalAutoIncServiceE") [2024-09-13 13:02:18.222391] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20239][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1546188226560) [2024-09-13 13:02:18.222481] INFO register_pm (ob_page_manager.cpp:40) [20239][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d2f52340, pm.get_tid()=20239, tenant_id=500) [2024-09-13 13:02:18.222506] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20239][][T1][Y0-0000000000000000-0-0] [lt=20] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=131) [2024-09-13 13:02:18.222506] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] ObTimeWheel start success(timer_name="DetectorTimer") [2024-09-13 13:02:18.222721] INFO 
[SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20240][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1550483193856) [2024-09-13 13:02:18.222805] INFO register_pm (ob_page_manager.cpp:40) [20240][][T0][Y0-0000000000000000-0-0] [lt=8] register pm finish(ret=0, &pm=0x2b07d2fd0340, pm.get_tid()=20240, tenant_id=500) [2024-09-13 13:02:18.222822] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20240][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=132) [2024-09-13 13:02:18.222826] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl47(cost_time_us=653, type="PN9oceanbase5share8detector21ObDeadLockDetectorMgrE") [2024-09-13 13:02:18.222832] INFO [OCCAM] get_idx (ob_occam_time_guard.h:224) [20240][T1_LCLSender][T1][Y0-0000000000000000-0-0] [lt=4] init point thread id with(&point=0x55a3873cd880, idx_=3856, point=[thread id=20240, timeout ts=08:00:00.0, last click point="(null):(null):0", last click ts=08:00:00.0], thread_id=20240) [2024-09-13 13:02:18.222838] INFO [STORAGE.TRANS] start (ob_xa_rpc.cpp:767) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] xa rpc start success [2024-09-13 13:02:18.223072] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20241][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1554778161152) [2024-09-13 13:02:18.223167] INFO register_pm (ob_page_manager.cpp:40) [20241][][T0][Y0-0000000000000000-0-0] [lt=8] register pm finish(ret=0, &pm=0x2b07d3056340, pm.get_tid()=20241, tenant_id=500) [2024-09-13 13:02:18.223185] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20241][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=133) [2024-09-13 13:02:18.223410] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20242][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate 
background session id(sessid=1559073128448) [2024-09-13 13:02:18.223517] INFO register_pm (ob_page_manager.cpp:40) [20242][][T0][Y0-0000000000000000-0-0] [lt=8] register pm finish(ret=0, &pm=0x2b07d30d4340, pm.get_tid()=20242, tenant_id=500) [2024-09-13 13:02:18.223535] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObTimeWheel start success(timer_name="XATimeWheel") [2024-09-13 13:02:18.223535] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20242][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=134) [2024-09-13 13:02:18.223540] INFO [STORAGE.TRANS] start (ob_trans_timer.cpp:209) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObTransTimer start success [2024-09-13 13:02:18.223762] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20243][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1563368095744) [2024-09-13 13:02:18.223758] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=33] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14131399066, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508]) [2024-09-13 13:02:18.223858] INFO register_pm (ob_page_manager.cpp:40) [20243][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d3152340, pm.get_tid()=20243, tenant_id=500) [2024-09-13 13:02:18.223894] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20243][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=135) [2024-09-13 13:02:18.223896] INFO [STORAGE.TRANS] start (ob_xa_trans_heartbeat_worker.cpp:51) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] XA trans heartbeat worker thread start [2024-09-13 13:02:18.223923] WDIAG [STORAGE.TRANS] xa_scheduler_hb_req (ob_xa_service.cpp:859) 
[20243][T1_ObXAHbWorker][T1][Y0-0000000000000000-0-0] [lt=9][errcode=0] ObXAService is not running [2024-09-13 13:02:18.223946] WDIAG [STORAGE.TRANS] run1 (ob_xa_trans_heartbeat_worker.cpp:75) [20243][T1_ObXAHbWorker][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4250] xa scheduler heartbeat failed(ret=-4250) [2024-09-13 13:02:18.223955] INFO [STORAGE.TRANS] run1 (ob_xa_trans_heartbeat_worker.cpp:84) [20243][T1_ObXAHbWorker][T1][Y0-0000000000000000-0-0] [lt=6] XA scheduler heartbeat task statistics(avg_time=41) [2024-09-13 13:02:18.224126] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20244][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1567663063040) [2024-09-13 13:02:18.224227] INFO register_pm (ob_page_manager.cpp:40) [20244][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d31d0340, pm.get_tid()=20244, tenant_id=500) [2024-09-13 13:02:18.224254] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20244][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=136) [2024-09-13 13:02:18.224258] INFO [STORAGE.TRANS] start (ob_xa_inner_table_gc_worker.cpp:55) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] XA gc worker thread start [2024-09-13 13:02:18.224269] INFO [STORAGE.TRANS] start (ob_xa_service.cpp:121) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] xa service start(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.224277] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish start mtl48(cost_time_us=1444, type="PN9oceanbase11transaction11ObXAServiceE") [2024-09-13 13:02:18.224281] INFO [STORAGE.TRANS] start (ob_gts_rpc.cpp:285) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] gts response rpc start success [2024-09-13 13:02:18.224285] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start 
mtl49(cost_time_us=5, type="PN9oceanbase11transaction18ObTimestampServiceE") [2024-09-13 13:02:18.224292] INFO start (ob_standby_timestamp_service.cpp:74) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] start tg(tg_id_=328, tg_name=StandbyTimestampService) [2024-09-13 13:02:18.224553] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20245][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1571958030336) [2024-09-13 13:02:18.224646] INFO register_pm (ob_page_manager.cpp:40) [20245][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d3256340, pm.get_tid()=20245, tenant_id=500) [2024-09-13 13:02:18.224672] INFO [STORAGE.TRANS] start (ob_gts_rpc.cpp:285) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] gts response rpc start success [2024-09-13 13:02:18.224672] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20245][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=137) [2024-09-13 13:02:18.224680] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish start mtl50(cost_time_us=391, type="PN9oceanbase11transaction25ObStandbyTimestampServiceE") [2024-09-13 13:02:18.224684] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl51(cost_time_us=0, type="PN9oceanbase11transaction17ObTimestampAccessE") [2024-09-13 13:02:18.224685] INFO [STORAGE.TRANS] run1 (ob_standby_timestamp_service.cpp:142) [20245][T1_STSWorker][T1][Y0-0000000000000000-0-0] [lt=5] ObStandbyTimestampService thread start(tenant_id=1) [2024-09-13 13:02:18.224688] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl52(cost_time_us=1, type="PN9oceanbase11transaction16ObTransIDServiceE") [2024-09-13 13:02:18.224689] INFO [STORAGE.TRANS] run1 (ob_standby_timestamp_service.cpp:155) 
[20245][T1_STSWorker][T1][Y0-0000000000000000-0-0] [lt=4] ObStandbyTimestampService thread end(tenant_id=1) [2024-09-13 13:02:18.224693] INFO [SHARE] end_run (ob_tenant_base.cpp:332) [20245][T1_STSWorker][T0][Y0-0000000000000000-0-0] [lt=3] tenant thread end_run(id_=1, ret=0, thread_count_=136) [2024-09-13 13:02:18.224695] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl53(cost_time_us=0, type="PN9oceanbase11transaction17ObUniqueIDServiceE") [2024-09-13 13:02:18.224700] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl54(cost_time_us=0, type="PN9oceanbase3sql17ObPlanBaselineMgrE") [2024-09-13 13:02:18.224711] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] finish start mtl55(cost_time_us=0, type="PN9oceanbase3sql9ObPsCacheE") [2024-09-13 13:02:18.224714] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl56(cost_time_us=0, type="PN9oceanbase3sql11ObPlanCacheE") [2024-09-13 13:02:18.224719] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl57(cost_time_us=1, type="PN9oceanbase6common15ObDetectManagerE") [2024-09-13 13:02:18.224722] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl58(cost_time_us=0, type="PN9oceanbase3sql3dtl11ObTenantDfcE") [2024-09-13 13:02:18.224727] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl59(cost_time_us=1, type="PN9oceanbase3omt9ObPxPoolsE") [2024-09-13 13:02:18.224726] INFO unregister_pm (ob_page_manager.cpp:50) [20245][T1_STSWorker][T0][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07d3256340, pm.get_tid()=20245) 
[2024-09-13 13:02:18.224734] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl60(cost_time_us=0, type="N9oceanbase3lib6Worker10CompatModeE") [2024-09-13 13:02:18.224746] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] create tg succeed(tg_id=391, tg=0x2b07c728d740, thread_cnt=1, tg->attr_={name:ReqMemEvict, type:3}, tg=0x2b07c728d740) [2024-09-13 13:02:18.224756] INFO start (ob_mysql_request_manager.cpp:107) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] start tg(tg_id_=391, tg_name=ReqMemEvict) [2024-09-13 13:02:18.224991] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20246][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1576252997632) [2024-09-13 13:02:18.225037] INFO register_pm (ob_page_manager.cpp:40) [20246][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07d32d4340, pm.get_tid()=20246, tenant_id=500) [2024-09-13 13:02:18.225060] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20246][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=137) [2024-09-13 13:02:18.225090] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimer create success(this=0x2b07c728d760, thread_id=20246, lbt()=0x24edc06b 0x13836960 0x115a4182 0xb32e4f6 0x11a85cae 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.225101] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish start mtl61(cost_time_us=361, type="PN9oceanbase7obmysql21ObMySQLRequestManagerE") [2024-09-13 13:02:18.225106] INFO start (ob_tenant_weak_read_service.cpp:127) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] start tg(tg_id_=330, tg_name=WeakRdSrv) [2024-09-13 13:02:18.225359] INFO [SHARE] get_next_sess_id 
(ob_active_session_guard.cpp:336) [20247][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1580547964928) [2024-09-13 13:02:18.225407] INFO run1 (ob_timer.cpp:361) [20246][][T1][Y0-0000000000000000-0-0] [lt=8] timer thread started(this=0x2b07c728d760, tid=20246, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.225482] INFO register_pm (ob_page_manager.cpp:40) [20247][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d3352340, pm.get_tid()=20247, tenant_id=500) [2024-09-13 13:02:18.225510] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] finish start mtl62(cost_time_us=404, type="PN9oceanbase11transaction23ObTenantWeakReadServiceE") [2024-09-13 13:02:18.225511] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20247][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=138) [2024-09-13 13:02:18.225521] INFO [STORAGE.TRANS] run1 (ob_tenant_weak_read_service.cpp:702) [20247][][T1][Y0-0000000000000000-0-0] [lt=5] [WRS] [TENANT_WEAK_READ_SERVICE] thread start(tenant_id=1) [2024-09-13 13:02:18.225519] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl63(cost_time_us=0, type="PN9oceanbase3sql24ObTenantSqlMemoryManagerE") [2024-09-13 13:02:18.225536] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl64(cost_time_us=9, type="PN9oceanbase3sql3dtl24ObDTLIntermResultManagerE") [2024-09-13 13:02:18.225542] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl65(cost_time_us=0, type="PN9oceanbase3sql21ObPlanMonitorNodeListE") [2024-09-13 13:02:18.225544] WDIAG [PALF] convert_to_ts (scn.cpp:265) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=6][errcode=-4016] invalid scn should not convert to ts (val_=18446744073709551615) [2024-09-13 13:02:18.225549] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl66(cost_time_us=1, type="PN9oceanbase3sql19ObDataAccessServiceE") [2024-09-13 13:02:18.225559] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl67(cost_time_us=2, type="PN9oceanbase3sql14ObDASIDServiceE") [2024-09-13 13:02:18.225563] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl68(cost_time_us=0, type="PN9oceanbase5share6schema21ObTenantSchemaServiceE") [2024-09-13 13:02:18.225551] INFO [STORAGE.TRANS] print_stat_ (ob_tenant_weak_read_service.cpp:541) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] [WRS] [TENANT_WEAK_READ_SERVICE] [STAT](tenant_id=1, server_version={version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0}, server_version_delta=1726203738225536, in_cluster_service=false, cluster_version={val:18446744073709551615, v:3}, min_cluster_version={val:18446744073709551615, v:3}, max_cluster_version={val:18446744073709551615, v:3}, get_cluster_version_err=0, cluster_version_delta=-1, cluster_service_master="0.0.0.0:0", cluster_service_tablet_id={id:226}, post_cluster_heartbeat_count=0, succ_cluster_heartbeat_count=0, cluster_heartbeat_interval=50000, local_cluster_version={val:0, v:0}, local_cluster_delta=1726203738225536, force_self_check=false, weak_read_refresh_interval=100000) [2024-09-13 13:02:18.225575] INFO [STORAGE] start (ob_tenant_freezer.cpp:138) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] [TenantFreezer] ObTenantFreezer start(tenant_info={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:18.225588] INFO 
[SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] finish start mtl69(cost_time_us=21, type="PN9oceanbase7storage15ObTenantFreezerE") [2024-09-13 13:02:18.225602] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=36][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:18.225622] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:18.225682] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738225588) [2024-09-13 13:02:18.225709] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=0, cluster_heartbeat_interval_=50000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:18.225726] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:18.225720] WDIAG [SHARE.LOCATION] batch_process_tasks 
(ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60C7D-0-0] [lt=9][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203738225638}) [2024-09-13 13:02:18.225741] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:18.225813] INFO alloc_array (ob_dchash.h:415) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14] DCHash: alloc_array: N9oceanbase6common9ObIntWarpE this=0x55a387b5fe80 array=0x2b07c048c030 array_size=65536 prev_array=(nil) [2024-09-13 13:02:18.225818] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20248][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1584842932224) [2024-09-13 13:02:18.225977] INFO register_pm (ob_page_manager.cpp:40) [20248][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07d33d0340, pm.get_tid()=20248, tenant_id=500) [2024-09-13 13:02:18.225999] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20248][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=139) [2024-09-13 13:02:18.226009] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimer create success(this=0x2b07c33d41b0, thread_id=20248, lbt()=0x24edc06b 0x13836960 0xf7c027b 0x11a86188 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.226309] INFO [SHARE] 
get_next_sess_id (ob_active_session_guard.cpp:336) [20249][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1589137899520) [2024-09-13 13:02:18.226344] INFO run1 (ob_timer.cpp:361) [20248][][T1][Y0-0000000000000000-0-0] [lt=5] timer thread started(this=0x2b07c33d41b0, tid=20248, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.226475] INFO register_pm (ob_page_manager.cpp:40) [20249][][T0][Y0-0000000000000000-0-0] [lt=23] register pm finish(ret=0, &pm=0x2b07d3456340, pm.get_tid()=20249, tenant_id=500) [2024-09-13 13:02:18.226512] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20249][][T1][Y0-0000000000000000-0-0] [lt=21] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=140) [2024-09-13 13:02:18.226523] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] ObTimer create success(this=0x2b07c33d42b0, thread_id=20249, lbt()=0x24edc06b 0x13836960 0xf7c0331 0x11a86188 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.226813] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20250][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1593432866816) [2024-09-13 13:02:18.226938] INFO register_pm (ob_page_manager.cpp:40) [20250][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07d34d4340, pm.get_tid()=20250, tenant_id=500) [2024-09-13 13:02:18.226963] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20250][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=141) [2024-09-13 13:02:18.226968] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] ObTimer create success(this=0x2b07c33d43b0, thread_id=20250, lbt()=0x24edc06b 0x13836960 0xf7c03ec 0x11a86188 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.226974] INFO [SHARE] start_mtl_module 
(ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish start mtl70(cost_time_us=1382, type="PN9oceanbase7storage10checkpoint19ObCheckPointServiceE") [2024-09-13 13:02:18.227115] INFO run1 (ob_timer.cpp:361) [20249][][T1][Y0-0000000000000000-0-0] [lt=12] timer thread started(this=0x2b07c33d42b0, tid=20249, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.227207] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20251][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1597727834112) [2024-09-13 13:02:18.227377] INFO register_pm (ob_page_manager.cpp:40) [20251][][T0][Y0-0000000000000000-0-0] [lt=34] register pm finish(ret=0, &pm=0x2b07d3552340, pm.get_tid()=20251, tenant_id=500) [2024-09-13 13:02:18.227406] INFO run1 (ob_timer.cpp:361) [20250][][T1][Y0-0000000000000000-0-0] [lt=7] timer thread started(this=0x2b07c33d43b0, tid=20250, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.227422] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20251][][T1][Y0-0000000000000000-0-0] [lt=22] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=142) [2024-09-13 13:02:18.227429] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimer create success(this=0x2b07c33d4600, thread_id=-1, lbt()=0x24edc06b 0x13836960 0xf802d1a 0x11a86212 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.227690] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20252][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1602022801408) [2024-09-13 13:02:18.227821] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DBE-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.227837] INFO register_pm (ob_page_manager.cpp:40) 
[20252][][T0][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07d35d0340, pm.get_tid()=20252, tenant_id=500) [2024-09-13 13:02:18.227867] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20252][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=143) [2024-09-13 13:02:18.227873] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=20] ObTimer create success(this=0x2b07c33d4720, thread_id=-1, lbt()=0x24edc06b 0x13836960 0xf802d9f 0x11a86212 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.227889] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] finish start mtl71(cost_time_us=910, type="PN9oceanbase7storage10checkpoint17ObTabletGCServiceE") [2024-09-13 13:02:18.227926] INFO run1 (ob_timer.cpp:361) [20251][][T1][Y0-0000000000000000-0-0] [lt=108] timer thread started(this=0x2b07c33d4600, tid=20251, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.228228] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20253][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1606317768704) [2024-09-13 13:02:18.228266] INFO run1 (ob_timer.cpp:361) [20252][][T1][Y0-0000000000000000-0-0] [lt=21] timer thread started(this=0x2b07c33d4720, tid=20252, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.228328] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DBE-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.228378] INFO register_pm (ob_page_manager.cpp:40) [20253][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d3856340, pm.get_tid()=20253, tenant_id=500) [2024-09-13 13:02:18.228411] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) 
[20253][][T1][Y0-0000000000000000-0-0] [lt=24] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=144) [2024-09-13 13:02:18.228417] INFO [ARCHIVE] start (ob_ls_mgr.cpp:194) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObArchiveLSMgr start succ [2024-09-13 13:02:18.228452] INFO [ARCHIVE] run1 (ob_ls_mgr.cpp:319) [20253][][T1][Y0-0000000000000000-0-0] [lt=12] ObArchiveLSMgr thread start [2024-09-13 13:02:18.228486] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=9] gc stale ls task succ [2024-09-13 13:02:18.228515] INFO [STORAGE.TRANS] init (ob_gts_task_queue.cpp:40) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] gts task queue init success(this=0x2b07c058e0b0, type=0) [2024-09-13 13:02:18.228533] INFO [STORAGE.TRANS] init (ob_gts_task_queue.cpp:40) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] gts task queue init success(this=0x2b07c059e170, type=1) [2024-09-13 13:02:18.228542] INFO [STORAGE.TRANS] init (ob_gts_source.cpp:137) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7] gts source init success(tenant_id=1, server="172.16.51.35:2882", this=0x2b07c058e070) [2024-09-13 13:02:18.228558] INFO [STORAGE.TRANS] init (ob_ts_mgr.cpp:56) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] ts source info init success(tenant_id=1) [2024-09-13 13:02:18.228616] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20254][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1610612736000) [2024-09-13 13:02:18.228620] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12] ObSliceAlloc init finished(bsize_=7936, isize_=72, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:18.228722] INFO register_pm (ob_page_manager.cpp:40) [20254][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d38d4340, pm.get_tid()=20254, 
tenant_id=500) [2024-09-13 13:02:18.228741] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20254][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=145) [2024-09-13 13:02:18.228743] INFO [ARCHIVE] start (ob_archive_sequencer.cpp:102) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObArchiveSequencer start succ [2024-09-13 13:02:18.228747] INFO [ARCHIVE] run1 (ob_archive_sequencer.cpp:132) [20254][][T1][Y0-0000000000000000-0-0] [lt=5] ObArchiveSequencer thread start [2024-09-13 13:02:18.228754] INFO [ARCHIVE] produce_log_fetch_task_ (ob_archive_sequencer.cpp:174) [20254][T1_ArcSeq][T1][YB42AC103323-000621F920D60C7D-0-0] [lt=4] archive round not in doing status, just skip(key={incarnation:-1, dest_id:-1, round:-1}, state={status:"INVALID"}) [2024-09-13 13:02:18.228943] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20255][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1614907703296) [2024-09-13 13:02:18.228950] INFO [STORAGE.TRANS] add_tenant_ (ob_ts_mgr.cpp:1294) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] ts source add tenant success(tenant_id=1, server="172.16.51.35:2882", timeguard=time guard 'add ts tenant' cost too much time, used=615, lbt()="0x24edc06b 0xfff9977 0xfff7ae0 0x24c269ca 0x252d5fab 0x252d7128 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.228986] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.229027] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=41][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.229036] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ 
(ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738225755) [2024-09-13 13:02:18.229064] INFO register_pm (ob_page_manager.cpp:40) [20255][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d3952340, pm.get_tid()=20255, tenant_id=500) [2024-09-13 13:02:18.229087] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20255][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=146) [2024-09-13 13:02:18.229092] INFO [ARCHIVE] start (ob_archive_fetcher.cpp:159) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] ObArchiveFetcher start succ [2024-09-13 13:02:18.229105] INFO [ARCHIVE] run1 (ob_archive_fetcher.cpp:280) [20255][][T1][Y0-0000000000000000-0-0] [lt=10] ObArchiveFetcher thread start [2024-09-13 13:02:18.229115] INFO [ARCHIVE] do_thread_task_ (ob_archive_fetcher.cpp:312) [20255][T1_ArcFetcher][T1][YB42AC103323-000621F920E60C7D-0-0] [lt=7] ObArchiveFetcher is running(thread_index=0) [2024-09-13 13:02:18.229326] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20256][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1619202670592) [2024-09-13 13:02:18.229419] INFO register_pm (ob_page_manager.cpp:40) [20256][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07d39d0340, pm.get_tid()=20256, tenant_id=500) [2024-09-13 13:02:18.229462] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20256][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=147) [2024-09-13 13:02:18.229467] INFO [ARCHIVE] start (ob_archive_sender.cpp:126) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] start ObArchiveSender threads succ(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.229475] INFO [ARCHIVE] run1 
(ob_archive_sender.cpp:231) [20256][][T1][Y0-0000000000000000-0-0] [lt=8] ObArchiveSender thread start(tenant_id=1) [2024-09-13 13:02:18.229701] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20257][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1623497637888) [2024-09-13 13:02:18.229804] INFO register_pm (ob_page_manager.cpp:40) [20257][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d3a56340, pm.get_tid()=20257, tenant_id=500) [2024-09-13 13:02:18.229829] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20257][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=148) [2024-09-13 13:02:18.229828] INFO [ARCHIVE] start (ob_archive_timer.cpp:83) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] ObArchiveTimer start succ(tenant_id_=1) [2024-09-13 13:02:18.229839] INFO [ARCHIVE] run1 (ob_archive_timer.cpp:102) [20257][][T1][Y0-0000000000000000-0-0] [lt=7] ObArchiveTimer thread start(tenant_id_=1) [2024-09-13 13:02:18.229847] INFO [ARCHIVE] do_thread_task_ (ob_archive_timer.cpp:130) [20257][T1_ArcTimer][T1][YB42AC103323-000621F921060C7D-0-0] [lt=5] archive round not in doing status, just skip(key={incarnation:-1, dest_id:-1, round:-1}, state={status:"INVALID"}) [2024-09-13 13:02:18.230071] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20258][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1627792605184) [2024-09-13 13:02:18.230169] INFO register_pm (ob_page_manager.cpp:40) [20258][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d3ad4340, pm.get_tid()=20258, tenant_id=500) [2024-09-13 13:02:18.230213] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20258][][T1][Y0-0000000000000000-0-0] [lt=33] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=149) [2024-09-13 13:02:18.230217] INFO [ARCHIVE] start (ob_archive_service.cpp:130) 
[19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] archive service start succ(tenant_id=1) [2024-09-13 13:02:18.230232] INFO [ARCHIVE] run1 (ob_archive_service.cpp:212) [20258][][T1][Y0-0000000000000000-0-0] [lt=10] ObArchiveService thread start(tenant_id=1) [2024-09-13 13:02:18.230228] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl72(cost_time_us=2330, type="PN9oceanbase7archive16ObArchiveServiceE") [2024-09-13 13:02:18.230259] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=22] create tg succeed(tg_id=392, tg=0x2b07c728d930, thread_cnt=1, tg->attr_={name:MergeLoop, type:3}, tg=0x2b07c728d930) [2024-09-13 13:02:18.230274] INFO start (ob_tenant_tablet_scheduler.cpp:363) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] start tg(merge_loop_tg_id_=392, tg_name=MergeLoop) [2024-09-13 13:02:18.230523] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20259][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1632087572480) [2024-09-13 13:02:18.230620] INFO register_pm (ob_page_manager.cpp:40) [20259][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07d3b52340, pm.get_tid()=20259, tenant_id=500) [2024-09-13 13:02:18.230649] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20259][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=150) [2024-09-13 13:02:18.230681] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] ObTimer create success(this=0x2b07c728d950, thread_id=20259, lbt()=0x24edc06b 0x13836960 0x115a4182 0x10ad2002 0x11a86326 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.230694] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] create tg succeed(tg_id=393, tg=0x2b07c728db20, thread_cnt=1, 
tg->attr_={name:MediumLoop, type:3}, tg=0x2b07c728db20) [2024-09-13 13:02:18.230700] INFO start (ob_tenant_tablet_scheduler.cpp:369) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(medium_loop_tg_id_=393, tg_name=MediumLoop) [2024-09-13 13:02:18.230964] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20260][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1636382539776) [2024-09-13 13:02:18.230987] INFO run1 (ob_timer.cpp:361) [20259][][T1][Y0-0000000000000000-0-0] [lt=7] timer thread started(this=0x2b07c728d950, tid=20259, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.231056] INFO register_pm (ob_page_manager.cpp:40) [20260][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07d3bd0340, pm.get_tid()=20260, tenant_id=500) [2024-09-13 13:02:18.231075] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20260][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=151) [2024-09-13 13:02:18.231081] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObTimer create success(this=0x2b07c728db40, thread_id=20260, lbt()=0x24edc06b 0x13836960 0x115a4182 0x10ad2124 0x11a86326 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.231092] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] create tg succeed(tg_id=394, tg=0x2b07c728dd10, thread_cnt=1, tg->attr_={name:SSTableGC, type:3}, tg=0x2b07c728dd10) [2024-09-13 13:02:18.231099] INFO start (ob_tenant_tablet_scheduler.cpp:375) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] start tg(sstable_gc_tg_id_=394, tg_name=SSTableGC) [2024-09-13 13:02:18.231280] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20261][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1640677507072) [2024-09-13 
13:02:18.231311] INFO run1 (ob_timer.cpp:361) [20260][][T1][Y0-0000000000000000-0-0] [lt=4] timer thread started(this=0x2b07c728db40, tid=20260, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.231357] INFO register_pm (ob_page_manager.cpp:40) [20261][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d3c56340, pm.get_tid()=20261, tenant_id=500) [2024-09-13 13:02:18.231378] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20261][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=152) [2024-09-13 13:02:18.231389] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimer create success(this=0x2b07c728dd30, thread_id=20261, lbt()=0x24edc06b 0x13836960 0x115a4182 0x10ad2246 0x11a86326 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.231404] INFO create_tg_tenant (thread_mgr.h:1043) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] create tg succeed(tg_id=395, tg=0x2b07c7281d50, thread_cnt=1, tg->attr_={name:InfoPoolResize, type:3}, tg=0x2b07c7281d50) [2024-09-13 13:02:18.231412] INFO start (ob_tenant_tablet_scheduler.cpp:381) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] start tg(info_pool_resize_tg_id_=395, tg_name=InfoPoolResize) [2024-09-13 13:02:18.231575] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20262][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1644972474368) [2024-09-13 13:02:18.231604] INFO run1 (ob_timer.cpp:361) [20261][][T1][Y0-0000000000000000-0-0] [lt=7] timer thread started(this=0x2b07c728dd30, tid=20261, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.231686] INFO register_pm (ob_page_manager.cpp:40) [20262][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07d3cd4340, pm.get_tid()=20262, tenant_id=500) [2024-09-13 
13:02:18.231706] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20262][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=153) [2024-09-13 13:02:18.231713] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimer create success(this=0x2b07c7281d70, thread_id=20262, lbt()=0x24edc06b 0x13836960 0x115a4182 0x10ad2369 0x11a86326 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.231721] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl73(cost_time_us=1470, type="PN9oceanbase7storage23ObTenantTabletSchedulerE") [2024-09-13 13:02:18.231725] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl74(cost_time_us=0, type="PN9oceanbase5share20ObTenantDagSchedulerE") [2024-09-13 13:02:18.231926] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20263][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1649267441664) [2024-09-13 13:02:18.232016] INFO register_pm (ob_page_manager.cpp:40) [20263][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d3d52340, pm.get_tid()=20263, tenant_id=500) [2024-09-13 13:02:18.232036] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20263][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=154) [2024-09-13 13:02:18.232036] INFO [COMMON] start (ob_storage_ha_service.cpp:118) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObStorageHAService start [2024-09-13 13:02:18.232042] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl75(cost_time_us=313, type="PN9oceanbase7storage18ObStorageHAServiceE") [2024-09-13 13:02:18.232050] INFO run1 (ob_timer.cpp:361) [20262][][T1][Y0-0000000000000000-0-0] [lt=5] 
timer thread started(this=0x2b07c7281d70, tid=20262, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.232060] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl76(cost_time_us=14, type="PN9oceanbase7storage21ObTenantFreezeInfoMgrE") [2024-09-13 13:02:18.232056] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=6] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:18.232065] INFO [STORAGE.TRANS] start (ob_tx_loop_worker.cpp:52) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] [Tx Loop Worker] start [2024-09-13 13:02:18.232275] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20264][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1653562408960) [2024-09-13 13:02:18.232370] INFO register_pm (ob_page_manager.cpp:40) [20264][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d3dd0340, pm.get_tid()=20264, tenant_id=500) [2024-09-13 13:02:18.232395] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20264][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=155) [2024-09-13 13:02:18.232395] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl77(cost_time_us=330, type="PN9oceanbase11transaction14ObTxLoopWorkerE") [2024-09-13 13:02:18.232405] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] finish start mtl78(cost_time_us=0, type="PN9oceanbase7storage15ObAccessServiceE") [2024-09-13 13:02:18.232405] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:104) [20264][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=7] tx gc loop thread is running(MTL_ID()=1) [2024-09-13 13:02:18.232418] INFO [STORAGE.TRANS] run1 
(ob_tx_loop_worker.cpp:111) [20264][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=12] try gc retain ctx [2024-09-13 13:02:18.232658] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20265][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1657857376256) [2024-09-13 13:02:18.232750] INFO register_pm (ob_page_manager.cpp:40) [20265][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d3e56340, pm.get_tid()=20265, tenant_id=500) [2024-09-13 13:02:18.232772] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20265][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=156) [2024-09-13 13:02:18.232772] INFO [COMMON] start (ob_transfer_service.cpp:117) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTransferService start [2024-09-13 13:02:18.232778] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish start mtl79(cost_time_us=368, type="PN9oceanbase7storage17ObTransferServiceE") [2024-09-13 13:02:18.232790] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] finish start mtl80(cost_time_us=0, type="PN9oceanbase10rootserver23ObTenantTransferServiceE") [2024-09-13 13:02:18.232812] INFO [STORAGE] scheduler_transfer_handler_ (ob_transfer_service.cpp:202) [20265][T1_TransferServ][T1][Y0-0000000000000000-0-0] [lt=4] start do transfer handler(ls_id_array_=[]) [2024-09-13 13:02:18.233032] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20266][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1662152343552) [2024-09-13 13:02:18.233123] INFO register_pm (ob_page_manager.cpp:40) [20266][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d3ed4340, pm.get_tid()=20266, tenant_id=500) [2024-09-13 13:02:18.233143] INFO [COMMON] start (ob_rebuild_service.cpp:409) 
[19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObRebuildService start [2024-09-13 13:02:18.233148] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20266][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=157) [2024-09-13 13:02:18.233151] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl81(cost_time_us=357, type="PN9oceanbase7storage16ObRebuildServiceE") [2024-09-13 13:02:18.233156] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl82(cost_time_us=1, type="PN9oceanbase8datadict17ObDataDictServiceE") [2024-09-13 13:02:18.233381] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20267][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1666447310848) [2024-09-13 13:02:18.233460] INFO register_pm (ob_page_manager.cpp:40) [20267][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d3f52340, pm.get_tid()=20267, tenant_id=500) [2024-09-13 13:02:18.233478] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20267][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=158) [2024-09-13 13:02:18.233511] INFO create (ob_timer.cpp:72) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] ObTimer create success(this=0x2b07c33f0b70, thread_id=20267, lbt()=0x24edc06b 0x13836960 0xac850b7 0x11a8688a 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.233522] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl83(cost_time_us=361, type="PN9oceanbase8observer18ObTableLoadServiceE") [2024-09-13 13:02:18.233526] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl84(cost_time_us=0, 
type="PN9oceanbase8observer26ObTableLoadResourceServiceE")
[2024-09-13 13:02:18.233899] INFO run1 (ob_timer.cpp:361) [20267][][T1][Y0-0000000000000000-0-0] [lt=4] timer thread started(this=0x2b07c33f0b70, tid=20267, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead)
[2024-09-13 13:02:18.234123] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20268][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1670742278144)
[2024-09-13 13:02:18.234248] INFO register_pm (ob_page_manager.cpp:40) [20268][][T0][Y0-0000000000000000-0-0] [lt=28] register pm finish(ret=0, &pm=0x2b07d3fd0340, pm.get_tid()=20268, tenant_id=500)
[2024-09-13 13:02:18.234268] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20268][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=159)
[2024-09-13 13:02:18.234268] INFO [OCCAM] init_and_start (ob_occam_thread_pool.h:104) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] init thread success(this=0x2b07baffe9f0, id=16, ret=0)
[2024-09-13 13:02:18.234277] INFO [OCCAM] run1 (ob_occam_thread_pool.h:127) [20268][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] thread is running function
[2024-09-13 13:02:18.234292] INFO [OCCAM] init (ob_occam_thread_pool.h:248) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] init occam thread pool success(ret=0, thread_num=1, queue_size_square_of_2=10, lbt()="0x24edc06b 0x8359ee6 0x8358cc3 0x8215155 0xf634929 0x11a8699e 0xb216e55 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:18.234842] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:111) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] TimeWheelBase inited success(precision=1000000, start_ticket=1726203738, scan_ticket=1726203738)
[2024-09-13 13:02:18.234854] INFO [STORAGE.TRANS] init (ob_time_wheel.cpp:371) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=11] ObTimeWheel init success(precision=1000000, real_thread_num=1)
[2024-09-13 13:02:18.235040] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20269][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1675037245440)
[2024-09-13 13:02:18.235133] INFO register_pm (ob_page_manager.cpp:40) [20269][][T0][Y0-0000000000000000-0-0] [lt=21] register pm finish(ret=0, &pm=0x2b07d4656340, pm.get_tid()=20269, tenant_id=500)
[2024-09-13 13:02:18.235157] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20269][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=160)
[2024-09-13 13:02:18.235157] INFO [STORAGE.TRANS] start (ob_time_wheel.cpp:416) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] ObTimeWheel start success(timer_name="MultiVersionGC")
[2024-09-13 13:02:18.235164] INFO [OCCAM] init_and_start (ob_occam_timer.h:570) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] init ObOccamTimer success(ret=0)
[2024-09-13 13:02:18.235176] INFO [MVCC] start (ob_multi_version_garbage_collector.cpp:120) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] multi version garbage collector start(this={this:0x2b07c33f0f80, last_study_timestamp:0, last_refresh_timestamp:0, last_reclaim_timestamp:0, last_sstable_overflow_timestamp:0, has_error_when_study:false, refresh_error_too_long:false, has_error_when_reclaim:false, gc_is_disabled:false, global_reserved_snapshot:{val:0, v:0}, is_inited:true}, GARBAGE_COLLECT_RETRY_INTERVAL=60000000, GARBAGE_COLLECT_EXEC_INTERVAL=600000000, GARBAGE_COLLECT_PRECISION=1000000, GARBAGE_COLLECT_RECLAIM_DURATION=1800000000)
[2024-09-13 13:02:18.235200] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=20] finish start mtl85(cost_time_us=1667, type="PN9oceanbase19concurrency_control30ObMultiVersionGarbageCollectorE")
[2024-09-13 13:02:18.235208] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] finish start mtl86(cost_time_us=0, type="PN9oceanbase3sql8ObUDRMgrE")
[2024-09-13 13:02:18.235212] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl87(cost_time_us=0, type="PN9oceanbase3sql12ObFLTSpanMgrE")
[2024-09-13 13:02:18.235216] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl88(cost_time_us=0, type="PN9oceanbase5share12ObTestModuleE")
[2024-09-13 13:02:18.235220] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl89(cost_time_us=0, type="PN9oceanbase10rootserver18ObHeartbeatServiceE")
[2024-09-13 13:02:18.235226] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl90(cost_time_us=3, type="PN9oceanbase6common23ObOptStatMonitorManagerE")
[2024-09-13 13:02:18.235241] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl91(cost_time_us=8, type="PN9oceanbase3omt11ObTenantSrsE")
[2024-09-13 13:02:18.235249] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] finish start mtl92(cost_time_us=0, type="PN9oceanbase5table15ObHTableLockMgrE")
[2024-09-13 13:02:18.235253] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl93(cost_time_us=0, type="PN9oceanbase5table12ObTTLServiceE")
[2024-09-13 13:02:18.235258] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl94(cost_time_us=1, type="PN9oceanbase5table21ObTableApiSessPoolMgrE")
[2024-09-13 13:02:18.235262] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl95(cost_time_us=1, type="PN9oceanbase7storage10checkpoint23ObCheckpointDiagnoseMgrE")
[2024-09-13 13:02:18.235266] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl96(cost_time_us=0, type="PN9oceanbase7storage18ObStorageHADiagMgrE")
[2024-09-13 13:02:18.235270] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl97(cost_time_us=1, type="PN9oceanbase5share19ObIndexUsageInfoMgrE")
[2024-09-13 13:02:18.235278] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl98(cost_time_us=0, type="PN9oceanbase5share25ObResourceLimitCalculatorE")
[2024-09-13 13:02:18.235284] INFO [SERVER] start (ob_table_tenant_group.cpp:272) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] successfully to start ObTableGroupCommitMgr
[2024-09-13 13:02:18.235289] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] finish start mtl99(cost_time_us=6, type="PN9oceanbase5table21ObTableGroupCommitMgrE")
[2024-09-13 13:02:18.235292] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] finish start mtl100(cost_time_us=0, type="PN9oceanbase3sql13ObAuditLoggerE")
[2024-09-13 13:02:18.235297] INFO [SHARE] start_mtl_module (ob_tenant_base.cpp:185) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] finish start mtl101(cost_time_us=0, type="PN9oceanbase3sql17ObAuditLogUpdaterE")
[2024-09-13 13:02:18.235445] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20270][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1679332212736)
[2024-09-13 13:02:18.235514] INFO register_pm (ob_page_manager.cpp:40) [20270][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d46d4340, pm.get_tid()=20270, tenant_id=500)
[2024-09-13 13:02:18.235538] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20270][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=161)
[2024-09-13 13:02:18.235672] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20271][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1683627180032)
[2024-09-13 13:02:18.235743] INFO register_pm (ob_page_manager.cpp:40) [20271][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07d4752340, pm.get_tid()=20271, tenant_id=500)
[2024-09-13 13:02:18.235762] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20271][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=162)
[2024-09-13 13:02:18.235926] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20272][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1687922147328)
[2024-09-13 13:02:18.236046] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=0] read /sys/class/net/eth0/speed failed, errno 22
[2024-09-13 13:02:18.236051] INFO register_pm (ob_page_manager.cpp:40) [20272][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d47d0340, pm.get_tid()=20272, tenant_id=500)
[2024-09-13 13:02:18.236063] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000
[2024-09-13 13:02:18.236068] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20272][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=163)
[2024-09-13 13:02:18.236070] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0")
[2024-09-13 13:02:18.236079] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR")
[2024-09-13 13:02:18.236251] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20273][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1692217114624)
[2024-09-13 13:02:18.236345] INFO register_pm (ob_page_manager.cpp:40) [20273][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07d4856340, pm.get_tid()=20273, tenant_id=500)
[2024-09-13 13:02:18.236368] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20273][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=164)
[2024-09-13 13:02:18.236602] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20274][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1696512081920)
[2024-09-13 13:02:18.236781] INFO register_pm (ob_page_manager.cpp:40) [20274][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d48d4340, pm.get_tid()=20274, tenant_id=500)
[2024-09-13 13:02:18.236803] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20274][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=165)
[2024-09-13 13:02:18.237018] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20275][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1700807049216)
[2024-09-13 13:02:18.237427] INFO register_pm (ob_page_manager.cpp:40) [20275][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d4952340, pm.get_tid()=20275, tenant_id=500)
[2024-09-13 13:02:18.237450] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20275][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=166)
[2024-09-13 13:02:18.237661] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20276][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1705102016512)
[2024-09-13 13:02:18.237753] INFO register_pm (ob_page_manager.cpp:40) [20276][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d49d0340, pm.get_tid()=20276, tenant_id=500)
[2024-09-13 13:02:18.237775] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20276][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=167)
[2024-09-13 13:02:18.237780] INFO [SHARE] update_thread_cnt (ob_tenant_base.cpp:440) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=3] update_thread_cnt(tenant_unit_cpu=2.000000000000000000e+00, old_thread_count=160, new_thread_count=167)
[2024-09-13 13:02:18.237804] INFO [SERVER.OMT] create_tenant_module (ob_tenant.cpp:1012) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] finish create mtl module>>>>(tenant_id=1, MTL_ID()=1, ret=0)
[2024-09-13 13:02:18.237833] INFO alloc_array (ob_dchash.h:415) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] DCHash: alloc_array: N9oceanbase6common9ObIntWarpE this=0x55a3879f7c00 array=0x2b07c72a8030 array_size=65536 prev_array=(nil)
[2024-09-13 13:02:18.239377] INFO [SQL.ENG] add_tenant (ob_px_target_mgr.cpp:197) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] px res info add tenant success(tenant_id=1, server_="172.16.51.35:2882", timeguard=time guard 'add px tenant' cost too much time, used=1566, lbt()="0x24edc06b 0xe056d4d 0xe055a5c 0xb216f46 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:18.239429] INFO alloc_array (ob_dchash.h:415) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=21] DCHash: alloc_array: N9oceanbase6common9ObIntWarpE this=0x55a38b360e00 array=0x2b07d4a04030 array_size=65536 prev_array=(nil)
[2024-09-13 13:02:18.241251] INFO [SHARE] add_tenant (ob_resource_col_mapping_rule_manager.cpp:248) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] add resource column mapping rule info(ret=0, tenant_id=1)
[2024-09-13 13:02:18.241515] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20277][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1709396983808)
[2024-09-13 13:02:18.241619] INFO register_pm (ob_page_manager.cpp:40) [20277][][T0][Y0-0000000000000000-0-0] [lt=25] register pm finish(ret=0, &pm=0x2b07d4c56340, pm.get_tid()=20277, tenant_id=500)
[2024-09-13 13:02:18.241648] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20277][][T1][Y0-0000000000000000-0-0] [lt=20] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=168)
[2024-09-13 13:02:18.241660] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20277][][T1][Y0-0000000000000000-0-0] [lt=9] Init thread local success
[2024-09-13 13:02:18.241672] INFO unregister_pm (ob_page_manager.cpp:50) [20277][][T1][Y0-0000000000000000-0-0] [lt=9] unregister pm finish(&pm=0x2b07d4c56340, pm.get_tid()=20277)
[2024-09-13 13:02:18.241690] INFO register_pm (ob_page_manager.cpp:40) [20277][][T1][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07d4c56340, pm.get_tid()=20277, tenant_id=1)
[2024-09-13 13:02:18.241841] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20278][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1713691951104)
[2024-09-13 13:02:18.241972] INFO register_pm (ob_page_manager.cpp:40) [20278][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07d4cd4340, pm.get_tid()=20278, tenant_id=500)
[2024-09-13 13:02:18.241995] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20278][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=169)
[2024-09-13 13:02:18.242001] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20278][][T1][Y0-0000000000000000-0-0] [lt=5] Init thread local success
[2024-09-13 13:02:18.242006] INFO unregister_pm (ob_page_manager.cpp:50) [20278][][T1][Y0-0000000000000000-0-0] [lt=4] unregister pm finish(&pm=0x2b07d4cd4340, pm.get_tid()=20278)
[2024-09-13 13:02:18.242053] INFO register_pm (ob_page_manager.cpp:40) [20278][][T1][Y0-0000000000000000-0-0] [lt=43] register pm finish(ret=0, &pm=0x2b07d4cd4340, pm.get_tid()=20278, tenant_id=1)
[2024-09-13 13:02:18.242215] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20279][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1717986918400)
[2024-09-13 13:02:18.242339] INFO register_pm (ob_page_manager.cpp:40) [20279][][T0][Y0-0000000000000000-0-0] [lt=15] register pm finish(ret=0, &pm=0x2b07d4d52340, pm.get_tid()=20279, tenant_id=500)
[2024-09-13 13:02:18.242370] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20279][][T1][Y0-0000000000000000-0-0] [lt=24] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=170)
[2024-09-13 13:02:18.242378] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20279][][T1][Y0-0000000000000000-0-0] [lt=7] Init thread local success
[2024-09-13 13:02:18.242385] INFO unregister_pm (ob_page_manager.cpp:50) [20279][][T1][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07d4d52340, pm.get_tid()=20279)
[2024-09-13 13:02:18.242403] INFO register_pm (ob_page_manager.cpp:40) [20279][][T1][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d4d52340, pm.get_tid()=20279, tenant_id=1)
[2024-09-13 13:02:18.242578] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20280][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1722281885696)
[2024-09-13 13:02:18.242712] INFO register_pm (ob_page_manager.cpp:40) [20280][][T0][Y0-0000000000000000-0-0] [lt=34] register pm finish(ret=0, &pm=0x2b07d4dd0340, pm.get_tid()=20280, tenant_id=500)
[2024-09-13 13:02:18.242733] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20280][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=171)
[2024-09-13 13:02:18.242738] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20280][][T1][Y0-0000000000000000-0-0] [lt=5] Init thread local success
[2024-09-13 13:02:18.242743] INFO unregister_pm (ob_page_manager.cpp:50) [20280][][T1][Y0-0000000000000000-0-0] [lt=4] unregister pm finish(&pm=0x2b07d4dd0340, pm.get_tid()=20280)
[2024-09-13 13:02:18.242751] INFO register_pm (ob_page_manager.cpp:40) [20280][][T1][Y0-0000000000000000-0-0] [lt=7] register pm finish(ret=0, &pm=0x2b07d4dd0340, pm.get_tid()=20280, tenant_id=1)
[2024-09-13 13:02:18.242961] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20281][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1726576852992)
[2024-09-13 13:02:18.243056] INFO register_pm (ob_page_manager.cpp:40) [20281][][T0][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07d4e56340, pm.get_tid()=20281, tenant_id=500)
[2024-09-13 13:02:18.243087] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20281][][T1][Y0-0000000000000000-0-0] [lt=17] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=172)
[2024-09-13 13:02:18.243096] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20281][][T1][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:18.243100] INFO unregister_pm (ob_page_manager.cpp:50) [20281][][T1][Y0-0000000000000000-0-0] [lt=3] unregister pm finish(&pm=0x2b07d4e56340, pm.get_tid()=20281)
[2024-09-13 13:02:18.243111] INFO register_pm (ob_page_manager.cpp:40) [20281][][T1][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07d4e56340, pm.get_tid()=20281, tenant_id=1)
[2024-09-13 13:02:18.243272] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20282][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1730871820288)
[2024-09-13 13:02:18.243365] INFO register_pm (ob_page_manager.cpp:40) [20282][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d4ed4340, pm.get_tid()=20282, tenant_id=500)
[2024-09-13 13:02:18.243393] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20282][][T1][Y0-0000000000000000-0-0] [lt=13] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=173)
[2024-09-13 13:02:18.243400] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20282][][T1][Y0-0000000000000000-0-0] [lt=6] Init thread local success
[2024-09-13 13:02:18.243404] INFO unregister_pm (ob_page_manager.cpp:50) [20282][][T1][Y0-0000000000000000-0-0] [lt=3] unregister pm finish(&pm=0x2b07d4ed4340, pm.get_tid()=20282)
[2024-09-13 13:02:18.243421] INFO register_pm (ob_page_manager.cpp:40) [20282][][T1][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d4ed4340, pm.get_tid()=20282, tenant_id=1)
[2024-09-13 13:02:18.243623] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20283][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1735166787584)
[2024-09-13 13:02:18.243703] INFO register_pm (ob_page_manager.cpp:40) [20283][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d4f52340, pm.get_tid()=20283, tenant_id=500)
[2024-09-13 13:02:18.243747] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20283][][T1][Y0-0000000000000000-0-0] [lt=19] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=174)
[2024-09-13 13:02:18.243757] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20283][][T1][Y0-0000000000000000-0-0] [lt=9] Init thread local success
[2024-09-13 13:02:18.243762] INFO unregister_pm (ob_page_manager.cpp:50) [20283][][T1][Y0-0000000000000000-0-0] [lt=3] unregister pm finish(&pm=0x2b07d4f52340, pm.get_tid()=20283)
[2024-09-13 13:02:18.243777] INFO register_pm (ob_page_manager.cpp:40) [20283][][T1][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d4f52340, pm.get_tid()=20283, tenant_id=1)
[2024-09-13 13:02:18.243979] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20284][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1739461754880)
[2024-09-13 13:02:18.244092] INFO register_pm (ob_page_manager.cpp:40) [20284][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d4fd0340, pm.get_tid()=20284, tenant_id=500)
[2024-09-13 13:02:18.244113] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20284][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=175)
[2024-09-13 13:02:18.244120] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20284][][T1][Y0-0000000000000000-0-0] [lt=6] Init thread local success
[2024-09-13 13:02:18.244124] INFO unregister_pm (ob_page_manager.cpp:50) [20284][][T1][Y0-0000000000000000-0-0] [lt=3] unregister pm finish(&pm=0x2b07d4fd0340, pm.get_tid()=20284)
[2024-09-13 13:02:18.244145] INFO register_pm (ob_page_manager.cpp:40) [20284][][T1][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07d4fd0340, pm.get_tid()=20284, tenant_id=1)
[2024-09-13 13:02:18.244322] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20285][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1743756722176)
[2024-09-13 13:02:18.244417] INFO register_pm (ob_page_manager.cpp:40) [20285][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07d5056340, pm.get_tid()=20285, tenant_id=500)
[2024-09-13 13:02:18.244446] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20285][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=176)
[2024-09-13 13:02:18.244453] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20285][][T1][Y0-0000000000000000-0-0] [lt=6] Init thread local success
[2024-09-13 13:02:18.244459] INFO unregister_pm (ob_page_manager.cpp:50) [20285][][T1][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07d5056340, pm.get_tid()=20285)
[2024-09-13 13:02:18.244478] INFO register_pm (ob_page_manager.cpp:40) [20285][][T1][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07d5056340, pm.get_tid()=20285, tenant_id=1)
[2024-09-13 13:02:18.244658] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20286][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1748051689472)
[2024-09-13 13:02:18.244734] INFO register_pm (ob_page_manager.cpp:40) [20286][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d50d4340, pm.get_tid()=20286, tenant_id=500)
[2024-09-13 13:02:18.244759] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20286][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=177)
[2024-09-13 13:02:18.244764] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20286][][T1][Y0-0000000000000000-0-0] [lt=4] Init thread local success
[2024-09-13 13:02:18.244768] INFO unregister_pm (ob_page_manager.cpp:50) [20286][][T1][Y0-0000000000000000-0-0] [lt=3] unregister pm finish(&pm=0x2b07d50d4340, pm.get_tid()=20286)
[2024-09-13 13:02:18.244778] INFO register_pm (ob_page_manager.cpp:40) [20286][][T1][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d50d4340, pm.get_tid()=20286, tenant_id=1)
[2024-09-13 13:02:18.244961] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20287][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1752346656768)
[2024-09-13 13:02:18.245042] INFO register_pm (ob_page_manager.cpp:40) [20287][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d5152340, pm.get_tid()=20287, tenant_id=500)
[2024-09-13 13:02:18.245067] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20287][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=178)
[2024-09-13 13:02:18.245075] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20287][][T1][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:18.245101] INFO unregister_pm (ob_page_manager.cpp:50) [20287][][T1][Y0-0000000000000000-0-0] [lt=25] unregister pm finish(&pm=0x2b07d5152340, pm.get_tid()=20287)
[2024-09-13 13:02:18.245109] INFO register_pm (ob_page_manager.cpp:40) [20287][][T1][Y0-0000000000000000-0-0] [lt=7] register pm finish(ret=0, &pm=0x2b07d5152340, pm.get_tid()=20287, tenant_id=1)
[2024-09-13 13:02:18.245232] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20288][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1756641624064)
[2024-09-13 13:02:18.245304] INFO register_pm (ob_page_manager.cpp:40) [20288][][T0][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d51d0340, pm.get_tid()=20288, tenant_id=500)
[2024-09-13 13:02:18.245326] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20288][][T1][Y0-0000000000000000-0-0] [lt=14] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=179)
[2024-09-13 13:02:18.245331] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20288][][T1][Y0-0000000000000000-0-0] [lt=4] Init thread local success
[2024-09-13 13:02:18.245335] INFO unregister_pm (ob_page_manager.cpp:50) [20288][][T1][Y0-0000000000000000-0-0] [lt=3] unregister pm finish(&pm=0x2b07d51d0340, pm.get_tid()=20288)
[2024-09-13 13:02:18.245347] INFO register_pm (ob_page_manager.cpp:40) [20288][][T1][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d51d0340, pm.get_tid()=20288, tenant_id=1)
[2024-09-13 13:02:18.245537] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20289][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1760936591360)
[2024-09-13 13:02:18.245633] INFO register_pm (ob_page_manager.cpp:40) [20289][][T0][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07d5256340, pm.get_tid()=20289, tenant_id=500)
[2024-09-13 13:02:18.245649] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20289][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=180)
[2024-09-13 13:02:18.245657] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20289][][T1][Y0-0000000000000000-0-0] [lt=7] Init thread local success
[2024-09-13 13:02:18.245662] INFO unregister_pm (ob_page_manager.cpp:50) [20289][][T1][Y0-0000000000000000-0-0] [lt=4] unregister pm finish(&pm=0x2b07d5256340, pm.get_tid()=20289)
[2024-09-13 13:02:18.245687] INFO register_pm (ob_page_manager.cpp:40) [20289][][T1][Y0-0000000000000000-0-0] [lt=24] register pm finish(ret=0, &pm=0x2b07d5256340, pm.get_tid()=20289, tenant_id=1)
[2024-09-13 13:02:18.245833] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20290][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1765231558656)
[2024-09-13 13:02:18.245958] INFO register_pm (ob_page_manager.cpp:40) [20290][][T0][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d52d4340, pm.get_tid()=20290, tenant_id=500)
[2024-09-13 13:02:18.245983] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20290][][T1][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=181)
[2024-09-13 13:02:18.245990] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20290][][T1][Y0-0000000000000000-0-0] [lt=7] Init thread local success
[2024-09-13 13:02:18.245998] INFO unregister_pm (ob_page_manager.cpp:50) [20290][][T1][Y0-0000000000000000-0-0] [lt=6] unregister pm finish(&pm=0x2b07d52d4340, pm.get_tid()=20290)
[2024-09-13 13:02:18.246015] INFO register_pm (ob_page_manager.cpp:40) [20290][][T1][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d52d4340, pm.get_tid()=20290, tenant_id=1)
[2024-09-13 13:02:18.246302] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20291][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1769526525952)
[2024-09-13 13:02:18.246403] INFO register_pm (ob_page_manager.cpp:40) [20291][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d5352340, pm.get_tid()=20291, tenant_id=500)
[2024-09-13 13:02:18.246431] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20291][][T1][Y0-0000000000000000-0-0] [lt=22] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=182)
[2024-09-13 13:02:18.246447] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20291][][T1][Y0-0000000000000000-0-0] [lt=15] Init thread local success
[2024-09-13 13:02:18.246454] INFO unregister_pm (ob_page_manager.cpp:50) [20291][][T1][Y0-0000000000000000-0-0] [lt=6] unregister pm finish(&pm=0x2b07d5352340, pm.get_tid()=20291)
[2024-09-13 13:02:18.246466] INFO register_pm (ob_page_manager.cpp:40) [20291][][T1][Y0-0000000000000000-0-0] [lt=10] register pm finish(ret=0, &pm=0x2b07d5352340, pm.get_tid()=20291, tenant_id=1)
[2024-09-13 13:02:18.246633] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20292][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1773821493248)
[2024-09-13 13:02:18.246733] INFO register_pm (ob_page_manager.cpp:40) [20292][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07d53d0340, pm.get_tid()=20292, tenant_id=500)
[2024-09-13 13:02:18.246763] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20292][][T1][Y0-0000000000000000-0-0] [lt=16] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=183)
[2024-09-13 13:02:18.246773] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20292][][T1][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:18.246779] INFO unregister_pm (ob_page_manager.cpp:50) [20292][][T1][Y0-0000000000000000-0-0] [lt=4] unregister pm finish(&pm=0x2b07d53d0340, pm.get_tid()=20292)
[2024-09-13 13:02:18.246795] INFO register_pm (ob_page_manager.cpp:40) [20292][][T1][Y0-0000000000000000-0-0] [lt=13] register pm finish(ret=0, &pm=0x2b07d53d0340, pm.get_tid()=20292, tenant_id=1)
[2024-09-13 13:02:18.246957] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20293][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1778116460544)
[2024-09-13 13:02:18.247060] INFO register_pm (ob_page_manager.cpp:40) [20293][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07d5456340, pm.get_tid()=20293, tenant_id=500)
[2024-09-13 13:02:18.247101] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20293][][T1][Y0-0000000000000000-0-0] [lt=19] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=184)
[2024-09-13 13:02:18.247113] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20293][][T1][Y0-0000000000000000-0-0] [lt=11] Init thread local success
[2024-09-13 13:02:18.247125] INFO unregister_pm (ob_page_manager.cpp:50) [20293][][T1][Y0-0000000000000000-0-0] [lt=10] unregister pm finish(&pm=0x2b07d5456340, pm.get_tid()=20293)
[2024-09-13 13:02:18.247143] INFO register_pm (ob_page_manager.cpp:40) [20293][][T1][Y0-0000000000000000-0-0] [lt=17] register pm finish(ret=0, &pm=0x2b07d5456340, pm.get_tid()=20293, tenant_id=1)
[2024-09-13 13:02:18.247290] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20294][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1782411427840)
[2024-09-13 13:02:18.247385] INFO register_pm (ob_page_manager.cpp:40) [20294][][T0][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07d54d4340, pm.get_tid()=20294, tenant_id=500)
[2024-09-13 13:02:18.247406] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20294][][T1][Y0-0000000000000000-0-0] [lt=11] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=185)
[2024-09-13 13:02:18.247414] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20294][][T1][Y0-0000000000000000-0-0] [lt=8] Init thread local success
[2024-09-13 13:02:18.247407] INFO [SERVER.OMT] check_worker_count (ob_tenant.cpp:1743) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=17] worker thread created(id_=1, token=10)
[2024-09-13 13:02:18.247418] INFO unregister_pm (ob_page_manager.cpp:50) [20294][][T1][Y0-0000000000000000-0-0] [lt=3] unregister pm finish(&pm=0x2b07d54d4340, pm.get_tid()=20294)
[2024-09-13 13:02:18.247427] INFO register_pm (ob_page_manager.cpp:40) [20294][][T1][Y0-0000000000000000-0-0] [lt=8] register pm finish(ret=0, &pm=0x2b07d54d4340, pm.get_tid()=20294, tenant_id=1)
[2024-09-13 13:02:18.247425] WDIAG [SERVER.OMT] get_tenant_base_with_lock (ob_multi_tenant.cpp:220) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11][errcode=-5150] get tenant from omt failed(ret=-5150, tenant_id=1)
[2024-09-13 13:02:18.247554] WDIAG [SHARE] switch_to (ob_tenant_base.cpp:550) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8][errcode=-5150] switch tenant fail(tenant_id=1, ret=-5150, lbt()="0x24edc06b 0x11aae550 0x24c401d0 0x24dd221c 0x24dd1628 0xb2173a8 0xb214999 0xb8fbc34 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75")
[2024-09-13 13:02:18.247583] INFO [STORAGE] set_tenant_mem_limit (ob_tenant_freezer.cpp:917) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] [TenantFreezer] set tenant mem limit(tenant id=1, mem_lower_limit=3221225472, mem_upper_limit=3221225472, mem_memstore_limit=1288490160, memstore_freeze_trigger_limit=257698020, mem_tenant_limit=3221225472, mem_tenant_hold=292696064, mem_memstore_used=0)
[2024-09-13 13:02:18.247897] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DBF-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:18.248714] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19930][pnio1][T0][YB42AC103326-00062119D7143DBF-0-0] [lt=10][errcode=-8004] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:18.249059] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D7143DC0-0-0] [lt=19][errcode=-8004] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:18.249409] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19932][pnio1][T0][YB42AC103326-00062119D7143DC0-0-0] [lt=11][errcode=-8004] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:18.255322] INFO [STORAGE.REDO] notify_flush (ob_storage_log_writer.cpp:552) [20010][OB_SLOG][T0][Y0-0000000000000000-0-0] [lt=20] Successfully flush(log_item={start_cursor:ObLogCursor{file_id=1, log_id=2, offset=266}, end_cursor:ObLogCursor{file_id=1, log_id=3, offset=345}, is_inited:true, is_local:false, buf_size:8192, buf:0x2b079e866050, len:3830, log_data_len:79, seq:2, flush_finish:false, flush_ret:0})
[2024-09-13 13:02:18.255386] INFO [SERVER.OMT] set_create_status (ob_tenant.cpp:910) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=12] set create status(tenant_id=1, unit_id=1000, new_status=1, old_status=0, tenant_meta={unit:{tenant_id:1, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"hidden_sys_unit", resource:{min_cpu:2, max_cpu:2, memory_size:"3GB", log_disk_size:"0GB", min_iops:9223372036854775807, max_iops:9223372036854775807, iops_weight:2}}, mode:0, create_timestamp:1726203737966288, is_removed:false}, super_block:{tenant_id:1, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true, version:2}, create_status:0})
[2024-09-13 13:02:18.255456] INFO [PALF] update_transport_compress_options (log_rpc.cpp:79) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=42] update_transport_compress_options success(compress_opt={enable_transport_compress_:false, transport_compress_func_:2})
[2024-09-13 13:02:18.255472] INFO [PALF] update_disk_options_not_guarded_by_lock_ (palf_env_impl.cpp:164) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] expand log disk success(curr_stop_write_limit_size=0, next_stop_write_limit_size=0, this={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:0})
[2024-09-13 13:02:18.255488] INFO [PALF] update_options (palf_env_impl.cpp:927) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=14] update_options successs(options={disk_options_:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, compress_options_:{enable_transport_compress_:false, transport_compress_func_:2}, rebuild_replica_log_lag_threshold_:0}, this={IPalfEnvImpl:{IPalfEnvImpl:"Dummy"}, self:"172.16.51.35:2882", log_dir:"/data1/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1}, log_alloc_mgr_:{flying_log_task:0, flying_meta_task:0}})
[2024-09-13 13:02:18.255512] INFO [CLOG] update_palf_options_except_disk_usage_limit_size (ob_log_service.cpp:578) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=22] palf update_options success(MTL_ID()=1, ret=0, palf_opts={disk_options_:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, compress_options_:{enable_transport_compress_:false, transport_compress_func_:2}, rebuild_replica_log_lag_threshold_:0})
[2024-09-13 13:02:18.255530] INFO [COMMON] set_thread_score (ob_dag_scheduler.cpp:3255) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] set thread score successfully(score=0, prio="PRIO_COMPACTION_HIGH", up_limits_[priority]=6, work_thread_num=43, default_work_thread_num=43)
[2024-09-13 13:02:18.255544] INFO [COMMON] set_thread_score (ob_dag_scheduler.cpp:3255) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] set thread score successfully(score=0, prio="PRIO_COMPACTION_MID", up_limits_[priority]=6, work_thread_num=43, default_work_thread_num=43)
[2024-09-13 13:02:18.255557] INFO [COMMON] set_thread_score (ob_dag_scheduler.cpp:3255) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=9] set thread score successfully(score=0, prio="PRIO_COMPACTION_LOW", up_limits_[priority]=6, work_thread_num=43, default_work_thread_num=43)
[2024-09-13 13:02:18.255564] INFO [COMMON] set_thread_score (ob_dag_scheduler.cpp:3255) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=5] set thread score successfully(score=0, prio="PRIO_HA_HIGH", up_limits_[priority]=8, work_thread_num=43, default_work_thread_num=43)
[2024-09-13 13:02:18.255571] INFO [COMMON] set_thread_score (ob_dag_scheduler.cpp:3255) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] set thread score successfully(score=0, prio="PRIO_HA_MID", up_limits_[priority]=5, work_thread_num=43, default_work_thread_num=43)
[2024-09-13 13:02:18.255577] INFO [COMMON] set_thread_score (ob_dag_scheduler.cpp:3255) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=4] set thread score successfully(score=0, prio="PRIO_HA_LOW", up_limits_[priority]=2, work_thread_num=43, default_work_thread_num=43)
[2024-09-13 13:02:18.255587] INFO [COMMON]
set_thread_score (ob_dag_scheduler.cpp:3255) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] set thread score successfully(score=0, prio="PRIO_DDL", up_limits_[priority]=2, work_thread_num=43, default_work_thread_num=43) [2024-09-13 13:02:18.255599] INFO [COMMON] set_thread_score (ob_dag_scheduler.cpp:3255) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] set thread score successfully(score=0, prio="PRIO_TTL", up_limits_[priority]=2, work_thread_num=43, default_work_thread_num=43) [2024-09-13 13:02:18.255619] INFO [STORAGE.TRANS] update_max_trace_info_size (ob_checkpoint_diagnose.cpp:409) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] max_trace_info_size update.(this={first_pos:0, last_pos:-1, max_trace_info_size:100}) [2024-09-13 13:02:18.255649] INFO [SHARE] update_throttle_config (ob_throttle_unit.ipp:519) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] [Throttle] Update Config(this=0x2b07c3393208, enable_adaptive_limit_=true, Unit Name=TxShare, Config Specify Resource Limit(MB)=1536, Resource Limit(MB)=1536, Throttle Trigger(MB)=921, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=3.212808046489710634e+00, New Resource Limit(MB)=1535, New trigger percentage=60, New Throttle Duration=7200000000) [2024-09-13 13:02:18.255726] INFO [SHARE] update_decay_factor_ (ob_throttle_unit.ipp:266) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=76] [Throttle] Update Throttle Unit Config(is_adaptive_update=false, N=3.071999998092651367e+02, this=0x2b07c3393208, enable_adaptive_limit_=true, Unit Name=TxShare, Config Specify Resource Limit(MB)=1535, Resource Limit(MB)=1536, Throttle Trigger(MB)=921, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=3.212808046489710634e+00) [2024-09-13 13:02:18.255741] INFO [SHARE] update_throttle_config (ob_throttle_unit.ipp:519) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=12] [Throttle] Update Config(this=0x2b07c338c170, enable_adaptive_limit_=false, Unit Name=Memstore, 
Config Specify Resource Limit(MB)=1228, Resource Limit(MB)=1228, Throttle Trigger(MB)=737, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=7.831060060963772607e+00, New Resource Limit(MB)=1228, New trigger percentage=60, New Throttle Duration=7200000000) [2024-09-13 13:02:18.255754] INFO [SHARE] update_decay_factor_ (ob_throttle_unit.ipp:266) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=13] [Throttle] Update Throttle Unit Config(is_adaptive_update=false, N=2.457599945068359375e+02, this=0x2b07c338c170, enable_adaptive_limit_=false, Unit Name=Memstore, Config Specify Resource Limit(MB)=1228, Resource Limit(MB)=1228, Throttle Trigger(MB)=737, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=7.831060728156441719e+00) [2024-09-13 13:02:18.255762] INFO [SHARE] update_throttle_config (ob_throttle_unit.ipp:519) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] [Throttle] Update Config(this=0x2b07c33850d8, enable_adaptive_limit_=false, Unit Name=TxData, Config Specify Resource Limit(MB)=614, Resource Limit(MB)=614, Throttle Trigger(MB)=368, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=2.590156323699679671e-08, New Resource Limit(MB)=614, New trigger percentage=60, New Throttle Duration=7200000000) [2024-09-13 13:02:18.255770] INFO [SHARE] update_decay_factor_ (ob_throttle_unit.ipp:266) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=8] [Throttle] Update Throttle Unit Config(is_adaptive_update=false, N=3.247203024193548481e+04, this=0x2b07c33850d8, enable_adaptive_limit_=false, Unit Name=TxData, Config Specify Resource Limit(MB)=614, Resource Limit(MB)=614, Throttle Trigger(MB)=368, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=2.590156524719408803e-08) [2024-09-13 13:02:18.255778] INFO [SHARE] update_throttle_config (ob_throttle_unit.ipp:519) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=7] [Throttle] Update Config(this=0x2b07c337e040, enable_adaptive_limit_=false, Unit Name=Mds, 
Config Specify Resource Limit(MB)=307, Resource Limit(MB)=307, Throttle Trigger(MB)=184, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=4.144004296208466418e-07, New Resource Limit(MB)=307, New trigger percentage=60, New Throttle Duration=7200000000) [2024-09-13 13:02:18.255784] INFO [SHARE] update_decay_factor_ (ob_throttle_unit.ipp:266) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=6] [Throttle] Update Throttle Unit Config(is_adaptive_update=false, N=1.623601512096774240e+04, this=0x2b07c337e040, enable_adaptive_limit_=false, Unit Name=Mds, Config Specify Resource Limit(MB)=307, Resource Limit(MB)=307, Throttle Trigger(MB)=184, Throttle Percentage=60, Max Duration(us)=7200000000, Decay Factor=4.144004553494351452e-07) [2024-09-13 13:02:18.255794] INFO [SHARE] update_throttle_config (ob_shared_memory_allocator_mgr.cpp:88) [19877][observer][T1][Y0-0000000000000001-0-0] [lt=10] [Throttle] Update Config(tenant_id_=1, total_memory=3221225472, share_mem_limit_percentage=50, share_mem_limit=1610612700, tenant_memstore_limit_percentage=40, memstore_limit=1288490160, tx_data_limit_percentage=20, tx_data_limit=644245080, mds_limit_percentage=10, mds_limit=322122540, trigger_percentage=60, max_duration=7200000000) [2024-09-13 13:02:18.255808] INFO [SERVER.OMT] update_tenant_config (ob_multi_tenant.cpp:1316) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] update_tenant_config success(tenant_id=1) [2024-09-13 13:02:18.255829] INFO [SERVER.OMT] create_tenant (ob_multi_tenant.cpp:1086) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=19] finish create new tenant(ret=0, tenant_id=1, write_slog=true, create_step=5, bucket_lock_idx=9780) [2024-09-13 13:02:18.255849] INFO [SERVER] try_update_hidden_sys (ob_server.cpp:1201) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] finish create hidden sys(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.255863] INFO [SERVER] start (ob_server.cpp:936) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=11] 
success to update hidden sys tenant [2024-09-13 13:02:18.255871] INFO [STORAGE.TRANS] start (ob_weak_read_service.cpp:66) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=7] [WRS] weak read service thread start [2024-09-13 13:02:18.255888] INFO [SERVER] start (ob_server.cpp:942) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=16] success to start weak read service [2024-09-13 13:02:18.256026] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=4][errcode=0] server is initiating(server_id=0, local_seq=0, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:18.256179] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20295][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1786706395136) [2024-09-13 13:02:18.256189] INFO [LIB] ObSliceAlloc (ob_slice_alloc.h:321) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=24] ObSliceAlloc init finished(bsize_=7936, isize_=80, slice_limit_=7536, tmallocator_=NULL) [2024-09-13 13:02:18.256324] INFO alloc_array (ob_dchash.h:415) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=13] DCHash: alloc_array: N9oceanbase3sql14SessionInfoKeyE this=0x55a386aef540 array=0x2b07d5a04030 array_size=65536 prev_array=(nil) [2024-09-13 13:02:18.256333] INFO register_pm (ob_page_manager.cpp:40) [20295][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07c23d0340, pm.get_tid()=20295, tenant_id=500) [2024-09-13 13:02:18.256372] INFO [STORAGE.TRANS] run1 (ob_black_list.cpp:183) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=19] blacklist refresh thread start(thread_index=0) [2024-09-13 13:02:18.256370] INFO [STORAGE.TRANS] start (ob_black_list.cpp:78) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] BLService start success [2024-09-13 13:02:18.256384] INFO [SERVER] start (ob_server.cpp:948) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=9] success to start 
blacklist service [2024-09-13 13:02:18.256653] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20296][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1791001362432) [2024-09-13 13:02:18.256805] INFO register_pm (ob_page_manager.cpp:40) [20296][][T0][Y0-0000000000000000-0-0] [lt=19] register pm finish(ret=0, &pm=0x2b07d5c56340, pm.get_tid()=20296, tenant_id=500) [2024-09-13 13:02:18.256830] INFO [SERVER] start (ob_server.cpp:955) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] success to start root service monitor [2024-09-13 13:02:18.256842] INFO [SERVER] start (ob_service.cpp:295) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=8] [OBSERVICE_NOTICE] start ob_service begin [2024-09-13 13:02:18.257140] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20297][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1795296329728) [2024-09-13 13:02:18.257311] INFO register_pm (ob_page_manager.cpp:40) [20297][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d5cd4340, pm.get_tid()=20297, tenant_id=500) [2024-09-13 13:02:18.257392] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=5] ObTimer create success(this=0x55a386e0f060, thread_id=20297, lbt()=0x24edc06b 0x13836960 0xb518bfc 0xb46acb3 0xb8f88df 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.257777] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20298][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1799591297024) [2024-09-13 13:02:18.257930] INFO run1 (ob_timer.cpp:361) [20297][][T0][Y0-0000000000000000-0-0] [lt=19] timer thread started(this=0x55a386e0f060, tid=20297, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.257954] INFO register_pm (ob_page_manager.cpp:40) [20298][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, 
&pm=0x2b07d5d52340, pm.get_tid()=20298, tenant_id=500) [2024-09-13 13:02:18.258005] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=14] ObTimer create success(this=0x55a386e0f160, thread_id=20298, lbt()=0x24edc06b 0x13836960 0xb518c59 0xb46acb3 0xb8f88df 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.258270] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20299][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1803886264320) [2024-09-13 13:02:18.258472] INFO register_pm (ob_page_manager.cpp:40) [20299][][T0][Y0-0000000000000000-0-0] [lt=16] register pm finish(ret=0, &pm=0x2b07d5dd0340, pm.get_tid()=20299, tenant_id=500) [2024-09-13 13:02:18.258506] INFO run1 (ob_timer.cpp:361) [20298][][T0][Y0-0000000000000000-0-0] [lt=20] timer thread started(this=0x55a386e0f160, tid=20298, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.258516] INFO create (ob_timer.cpp:72) [19877][observer][T0][Y0-0000000000000001-0-0] [lt=10] ObTimer create success(this=0x55a386e0f260, thread_id=20299, lbt()=0x24edc06b 0x13836960 0xb518cb6 0xb46acb3 0xb8f88df 0x7ff47f5 0x2b0795fc03d5 0x5e9ab75) [2024-09-13 13:02:18.258531] INFO [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] begin register_self_busy_wait [2024-09-13 13:02:18.258768] INFO [RPC.OBRPC] regist_dest_if_need (ob_net_keepalive.cpp:326) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] add new rs, addr: "172.16.51.35:2882" [2024-09-13 13:02:18.258811] INFO pktc_sk_new (pktc_sk_factory.h:78) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=13] PNIO sk_new: s=0x2b07b0a3a048 [2024-09-13 13:02:18.258906] INFO pktc_do_connect (pktc_post.h:19) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=12] PNIO sk_new: sk=0x2b07b0a3a048, fd=114 [2024-09-13 13:02:18.258920] INFO ussl_loop_add_clientfd 
(ussl-loop.c:262) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=10] [ussl] write client fd succ, fd:114, gid:0x100000000, need_send_negotiation:1 [2024-09-13 13:02:18.258921] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] sock regist: 0x2b07b3e1f850 fd=115 [2024-09-13 13:02:18.258926] INFO eloop_regist (eloop.c:47) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] PNIO sock regist: 0x2b07b0a3a048 fd=114 [2024-09-13 13:02:18.258924] INFO run1 (ob_timer.cpp:361) [20299][][T0][Y0-0000000000000000-0-0] [lt=19] timer thread started(this=0x55a386e0f260, tid=20299, lbt()=0x24edc06b 0x24db8067 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead) [2024-09-13 13:02:18.258935] INFO pktc_sk_check_connect (pktc_sk_factory.h:17) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO sock not ready: 0x2b07b0a3a048, fd=114 [2024-09-13 13:02:18.258938] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=14] [ussl] accept new connection, fd:115, src_addr:172.16.51.35:50072 [2024-09-13 13:02:18.258956] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] sock regist: 0x2b07b3e1f920 fd=114 [2024-09-13 13:02:18.258989] INFO handle_client_writable_event (handle-event.c:125) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] [ussl] client send negotiation message succ, fd:114, addr:"172.16.51.35:50072", auth_method:NONE, gid:0x100000000 [2024-09-13 13:02:18.259002] INFO epoll_unregist_and_give_back (handle-event.c:63) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=8] [ussl] give back fd to origin epoll succ, client_fd:114, client_epfd:65, event:0x8000000d, client_addr:"172.16.51.35:50072", need_close:0 [2024-09-13 13:02:18.259016] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] [ussl] auth mothod is NONE, the fd will be dispatched, fd:115, 
src_addr:172.16.51.35:50072 [2024-09-13 13:02:18.259024] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] PNIO dispatch fd to certain group, fd:115, gid:0x100000000 [2024-09-13 13:02:18.259015] INFO pktc_sk_check_connect (pktc_sk_factory.h:25) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] PNIO sock connect OK: 0x2b07b0a3a048 fd:114:local:"172.16.51.35:2882":remote:"172.16.51.35:2882" [2024-09-13 13:02:18.259064] INFO pkts_sk_init (pkts_sk_factory.h:23) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=13] PNIO set pkts_sk_t sock_id s=0x2b07b0a3aa98, s->id=65533 [2024-09-13 13:02:18.259071] INFO pkts_sk_new (pkts_sk_factory.h:51) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO sk_new: s=0x2b07b0a3aa98 [2024-09-13 13:02:18.259085] INFO eloop_regist (eloop.c:47) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO sock regist: 0x2b07b0a3aa98 fd=115 [2024-09-13 13:02:18.259093] INFO on_accept (listenfd.c:39) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO accept new connection, ns=0x2b07b0a3aa98, fd=fd:115:local:"172.16.51.35:50072":remote:"172.16.51.35:50072" [2024-09-13 13:02:18.259127] WDIAG listenfd_handle_event (listenfd.c:71) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1 [2024-09-13 13:02:18.259244] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=20][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000) [2024-09-13 13:02:18.259257] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12][errcode=-4638] [2024-09-13 13:02:18.259416] INFO pktc_sk_new (pktc_sk_factory.h:78) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=1] PNIO sk_new: s=0x2b07b0a3b4a8 [2024-09-13 13:02:18.259428] INFO [RPC.OBRPC] regist_dest_if_need (ob_net_keepalive.cpp:326) 
[19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] add new rs, addr: "172.16.51.36:2882" [2024-09-13 13:02:18.259461] INFO pktc_do_connect (pktc_post.h:19) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=8] PNIO sk_new: sk=0x2b07b0a3b4a8, fd=116 [2024-09-13 13:02:18.259470] INFO ussl_loop_add_clientfd (ussl-loop.c:262) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] [ussl] write client fd succ, fd:116, gid:0x100000001, need_send_negotiation:1 [2024-09-13 13:02:18.259474] INFO eloop_regist (eloop.c:47) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO sock regist: 0x2b07b0a3b4a8 fd=116 [2024-09-13 13:02:18.259480] INFO pktc_sk_check_connect (pktc_sk_factory.h:17) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=3] PNIO sock not ready: 0x2b07b0a3b4a8, fd=116 [2024-09-13 13:02:18.259491] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] [ussl] sock regist: 0x2b07b3e1f9a0 fd=116 [2024-09-13 13:02:18.259488] INFO pktc_sk_new (pktc_sk_factory.h:78) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=2] PNIO sk_new: s=0x2b07b0a54048 [2024-09-13 13:02:18.259495] INFO [RPC.OBRPC] regist_dest_if_need (ob_net_keepalive.cpp:326) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=15] add new rs, addr: "172.16.51.37:2882" [2024-09-13 13:02:18.259502] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] sock regist: 0x2b07b3e1fa80 fd=117 [2024-09-13 13:02:18.259507] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] [ussl] accept new connection, fd:117, src_addr:172.16.51.35:50074 [2024-09-13 13:02:18.259517] INFO pktc_sk_new (pktc_sk_factory.h:78) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=8] PNIO sk_new: s=0x2b07b0a54a98 [2024-09-13 13:02:18.259525] INFO handle_client_writable_event (handle-event.c:125) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] client send negotiation message succ, 
fd:116, addr:"172.16.51.35:50074", auth_method:NONE, gid:0x100000001 [2024-09-13 13:02:18.259533] INFO epoll_unregist_and_give_back (handle-event.c:63) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] give back fd to origin epoll succ, client_fd:116, client_epfd:72, event:0x8000000d, client_addr:"172.16.51.35:50074", need_close:0 [2024-09-13 13:02:18.259534] INFO pktc_do_connect (pktc_post.h:19) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO sk_new: sk=0x2b07b0a54048, fd=118 [2024-09-13 13:02:18.259537] INFO pktc_do_connect (pktc_post.h:19) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] PNIO sk_new: sk=0x2b07b0a54a98, fd=119 [2024-09-13 13:02:18.259544] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] auth mothod is NONE, the fd will be dispatched, fd:117, src_addr:172.16.51.35:50074 [2024-09-13 13:02:18.259544] INFO ussl_loop_add_clientfd (ussl-loop.c:262) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] [ussl] write client fd succ, fd:118, gid:0x100000002, need_send_negotiation:1 [2024-09-13 13:02:18.259544] INFO ussl_loop_add_clientfd (ussl-loop.c:262) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] write client fd succ, fd:119, gid:0x100000000, need_send_negotiation:1 [2024-09-13 13:02:18.259548] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] PNIO dispatch fd to certain group, fd:117, gid:0x100000001 [2024-09-13 13:02:18.259550] INFO eloop_regist (eloop.c:47) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO sock regist: 0x2b07b0a54048 fd=118 [2024-09-13 13:02:18.259550] INFO eloop_regist (eloop.c:47) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO sock regist: 0x2b07b0a54a98 fd=119 [2024-09-13 13:02:18.259557] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] [ussl] sock regist: 0x2b07b3e1f9a0 fd=118 [2024-09-13 
13:02:18.259558] INFO pktc_sk_check_connect (pktc_sk_factory.h:17) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO sock not ready: 0x2b07b0a54a98, fd=119 [2024-09-13 13:02:18.259559] INFO pktc_sk_check_connect (pktc_sk_factory.h:17) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO sock not ready: 0x2b07b0a54048, fd=118 [2024-09-13 13:02:18.259562] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] [ussl] sock regist: 0x2b07b3e1fa80 fd=119 [2024-09-13 13:02:18.259570] INFO pkts_sk_init (pkts_sk_factory.h:23) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO set pkts_sk_t sock_id s=0x2b07b0a554e8, s->id=65533 [2024-09-13 13:02:18.259574] INFO pkts_sk_new (pkts_sk_factory.h:51) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=3] PNIO sk_new: s=0x2b07b0a554e8 [2024-09-13 13:02:18.259580] INFO eloop_regist (eloop.c:47) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=3] PNIO sock regist: 0x2b07b0a554e8 fd=117 [2024-09-13 13:02:18.259586] INFO on_accept (listenfd.c:39) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO accept new connection, ns=0x2b07b0a554e8, fd=fd:117:local:"172.16.51.35:50074":remote:"172.16.51.35:50074" [2024-09-13 13:02:18.259603] INFO pktc_sk_check_connect (pktc_sk_factory.h:25) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=14] PNIO sock connect OK: 0x2b07b0a3b4a8 fd:116:local:"172.16.51.35:2882":remote:"172.16.51.35:2882" [2024-09-13 13:02:18.259635] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=12] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:18.259684] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=7] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1]) [2024-09-13 13:02:18.259709] INFO 
[SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=21] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1) [2024-09-13 13:02:18.259912] INFO handle_client_writable_event (handle-event.c:125) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] [ussl] client send negotiation message succ, fd:119, addr:"172.16.51.35:55156", auth_method:NONE, gid:0x100000000 [2024-09-13 13:02:18.259925] INFO epoll_unregist_and_give_back (handle-event.c:63) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] [ussl] give back fd to origin epoll succ, client_fd:119, client_epfd:65, event:0x8000000d, client_addr:"172.16.51.35:55156", need_close:0 [2024-09-13 13:02:18.259933] INFO pktc_sk_check_connect (pktc_sk_factory.h:25) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] PNIO sock connect OK: 0x2b07b0a54a98 fd:119:local:"172.16.51.37:2882":remote:"172.16.51.37:2882" [2024-09-13 13:02:18.259938] INFO handle_client_writable_event (handle-event.c:125) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] [ussl] client send negotiation message succ, fd:118, addr:"172.16.51.35:38120", auth_method:NONE, gid:0x100000002 [2024-09-13 13:02:18.259954] INFO epoll_unregist_and_give_back (handle-event.c:63) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] give back fd to origin epoll succ, client_fd:118, client_epfd:79, event:0x8000000d, client_addr:"172.16.51.35:38120", need_close:0 [2024-09-13 13:02:18.259972] INFO pktc_sk_check_connect (pktc_sk_factory.h:25) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] PNIO sock connect OK: 0x2b07b0a54048 fd:118:local:"172.16.51.36:2882":remote:"172.16.51.36:2882" [2024-09-13 13:02:18.260801] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20300][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1808181231616) [2024-09-13 13:02:18.260949] INFO register_pm 
(ob_page_manager.cpp:40) [20300][][T0][Y0-0000000000000000-0-0] [lt=24] register pm finish(ret=0, &pm=0x2b07d5552340, pm.get_tid()=20300, tenant_id=500) [2024-09-13 13:02:18.260984] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20300][][T1][Y0-0000000000000000-0-0] [lt=20] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=186) [2024-09-13 13:02:18.260997] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20300][][T1][Y0-0000000000000000-0-0] [lt=12] Init thread local success [2024-09-13 13:02:18.261010] INFO unregister_pm (ob_page_manager.cpp:50) [20300][][T1][Y0-0000000000000000-0-0] [lt=10] unregister pm finish(&pm=0x2b07d5552340, pm.get_tid()=20300) [2024-09-13 13:02:18.261024] INFO register_pm (ob_page_manager.cpp:40) [20300][][T1][Y0-0000000000000000-0-0] [lt=11] register pm finish(ret=0, &pm=0x2b07d5552340, pm.get_tid()=20300, tenant_id=1) [2024-09-13 13:02:18.261142] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20301][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1812476198912) [2024-09-13 13:02:18.261256] INFO register_pm (ob_page_manager.cpp:40) [20301][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d55d0340, pm.get_tid()=20301, tenant_id=500) [2024-09-13 13:02:18.261272] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] Cache replace map node details(ret=0, replace_node_count=0, replace_time=2621, replace_start_pos=62914, replace_num=62914) [2024-09-13 13:02:18.261287] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=13] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10) [2024-09-13 13:02:18.261293] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20301][][T1][Y0-0000000000000000-0-0] [lt=20] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=187) [2024-09-13 13:02:18.261292] 
INFO [SERVER.OMT] check_worker_count (ob_tenant.cpp:507) [19931][pnio1][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3] worker thread created(tenant_->id()=1, group_id_=9, token=2) [2024-09-13 13:02:18.261299] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20301][][T1][Y0-0000000000000000-0-0] [lt=5] Init thread local success [2024-09-13 13:02:18.261286] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, table_name.ptr()="data_size:27, data:5F5F616C6C5F7669727475616C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:18.261303] INFO unregister_pm (ob_page_manager.cpp:50) [20301][][T1][Y0-0000000000000000-0-0] [lt=3] unregister pm finish(&pm=0x2b07d55d0340, pm.get_tid()=20301) [2024-09-13 13:02:18.261304] INFO [SERVER.OMT] recv_group_request (ob_tenant.cpp:1382) [19931][pnio1][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] create group successfully(id=1, group_id=9, group=0x2b07d28dc030) [2024-09-13 13:02:18.261311] INFO register_pm (ob_page_manager.cpp:40) [20301][][T1][Y0-0000000000000000-0-0] [lt=7] register pm finish(ret=0, &pm=0x2b07d55d0340, pm.get_tid()=20301, tenant_id=1) [2024-09-13 13:02:18.261319] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=27][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, ret=-5019) [2024-09-13 13:02:18.261324] WDIAG listenfd_handle_event (listenfd.c:71) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=6][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1 [2024-09-13 13:02:18.261331] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=11][errcode=-5019] fail to resolve table relation 
recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_virtual_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:18.261344] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=10][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_virtual_ls_meta_table) [2024-09-13 13:02:18.261371] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=10][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:18.261380] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=8][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:18.261393] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=7][errcode=-5019] Table 'oceanbase.__all_virtual_ls_meta_table' doesn't exist [2024-09-13 13:02:18.261401] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:18.261403] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.261411] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=8][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:18.261420] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=8][errcode=-5019] resolve basic table failed(ret=-5019) 
[2024-09-13 13:02:18.261499] WDIAG [SQL.RESV] resolve_joined_table_item (ob_dml_resolver.cpp:3379) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=74][errcode=-5019] resolve table failed(ret=-5019)
[2024-09-13 13:02:18.261514] WDIAG [SQL.RESV] resolve_joined_table (ob_dml_resolver.cpp:2934) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=11][errcode=-5019] resolve joined table item failed(ret=-5019)
[2024-09-13 13:02:18.261512] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323})
[2024-09-13 13:02:18.261524] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2788) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=9][errcode=-5019] resolve joined table failed(ret=-5019)
[2024-09-13 13:02:18.261531] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=5][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:18.261537] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=19][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882")
[2024-09-13 13:02:18.261542] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=9][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:18.261553] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=8][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:18.261557] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.261587] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.261580] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=24][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:18.261597] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.261609] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.261615] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=15][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:18.261622] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.261624] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=7][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:18.261631] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:18.261636] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:18.261642] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:18.261644] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:18.261648] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0)
[2024-09-13 13:02:18.261655] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=10][errcode=-5019] fail to handle text query(stmt=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;, ret=-5019)
[2024-09-13 13:02:18.261667] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=9][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:18.261676] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=6][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:18.261698] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=12][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:18.261721] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=13][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:18.261728] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=6][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:18.261732] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:18.261754] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.261769] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:18.261783] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:18.261793] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20295][BlackListServic][T0][YB42AC103323-000621F921260C7D-0-0] [lt=9][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:18.261800] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;)
[2024-09-13 13:02:18.261806] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:18.261811] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:18.261815] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203738258000, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;)
[2024-09-13 13:02:18.261822] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:111) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:18.261831] WDIAG [STORAGE.TRANS] do_thread_task_ (ob_black_list.cpp:222) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;)
[2024-09-13 13:02:18.261856] INFO [STORAGE.TRANS] print_stat_ (ob_black_list.cpp:398) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=24] start to print blacklist info
[2024-09-13 13:02:18.261936] INFO [STORAGE.TRANS] run1 (ob_black_list.cpp:194) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4] ls blacklist refresh finish(cost_time=5553)
[2024-09-13 13:02:18.261956] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323})
[2024-09-13 13:02:18.261974] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=17][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882")
[2024-09-13 13:02:18.261986] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.261998] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.262006] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.262017] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.262029] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.262042] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:18.262054] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:18.262061] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1)
[2024-09-13 13:02:18.262130] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.262304] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323})
[2024-09-13 13:02:18.262324] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=18][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882")
[2024-09-13 13:02:18.262338] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.262352] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.262367] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.262380] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.262391] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.262404] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:18.262415] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:18.262425] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2)
[2024-09-13 13:02:18.262450] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=19][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638)
[2024-09-13 13:02:18.262463] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:18.262472] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2)
[2024-09-13 13:02:18.264603] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:18.264642] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=36][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:18.264780] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.265347] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=5][errcode=0] server is initiating(server_id=0, local_seq=1, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:18.266041] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323})
[2024-09-13 13:02:18.266065] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882")
[2024-09-13 13:02:18.266083] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.266100] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.266113] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.266126] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.266161] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738266155, replica_locations:[]})
[2024-09-13 13:02:18.266186] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:18.266201] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:18.266214] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:18.266238] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:18.266250] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:18.266261] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=24] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:18.266262] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:18.266302] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=32][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:18.266339] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=36][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:18.266354] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:18.266374] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:18.266388] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:18.266402] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:18.266421] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:18.266445] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:18.266462] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:18.266475] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:18.266485] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:18.266494] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:18.266509] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:18.266546] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:18.266558] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:18.266569] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:18.266579] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:18.266590] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:18.266602] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:18.266632] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] already timeout, do not need sleep(sleep_us=0, remain_us=1993099, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:18.270378] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.270471] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323})
[2024-09-13 13:02:18.270493] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882")
[2024-09-13 13:02:18.270506] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.270521] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.270533] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.270547] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.270563] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738270562, replica_locations:[]})
[2024-09-13 13:02:18.270583] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:18.270610] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:18.270632] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:18.270647] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:18.270665] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:18.270675] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:18.270683] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:18.270707] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:18.270720] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:18.270862] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546147896, cache_obj->added_lc()=false, cache_obj->get_object_id()=1, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:18.271867] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=30][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:18.271917] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=49][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:18.272058] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.272293] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323})
[2024-09-13 13:02:18.272316] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=0] fail to get result by
rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.272330] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.272345] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.272358] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.272372] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.272386] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738272386, replica_locations:[]}) [2024-09-13 13:02:18.272406] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}) [2024-09-13 13:02:18.272420] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.272433] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.272462] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=30][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:18.272474] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:18.272481] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:18.272499] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], 
tablet_ids=[{id:1}]) [2024-09-13 13:02:18.272513] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:18.272523] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:18.272534] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:18.272544] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:18.272563] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:18.272575] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:18.272587] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:18.272597] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] 
[lt=10][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:18.272606] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:18.272617] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:18.272627] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:18.272638] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:18.272652] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:18.272662] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:18.272673] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:18.272679] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:18.272690] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] 
[lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:18.272700] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=1, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:18.272719] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] will sleep(sleep_us=1000, remain_us=1987011, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.273969] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.274094] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.274115] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.274128] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.274142] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.274155] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.274168] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.274183] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738274182, replica_locations:[]}) [2024-09-13 13:02:18.274201] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.274220] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:1, local_retry_times:1, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:18.274238] WDIAG [SQL] do_close_plan 
(ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.274248] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.274261] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:18.274271] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:18.274281] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:18.274296] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:18.274309] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.274348] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546151465, cache_obj->added_lc()=false, cache_obj->get_object_id()=2, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 
0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.275520] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.275549] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.275676] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20288][T1_L0_G0][T1][YB42AC103326-00062119D8E48924-0-0] [lt=4][errcode=0] server is initiating(server_id=0, local_seq=2, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:18.275690] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.275840] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.275862] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.275884] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.275899] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.275912] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.275925] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.275940] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738275939, replica_locations:[]}) [2024-09-13 13:02:18.275958] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.275973] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.275985] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.276002] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:18.276014] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:18.276025] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:18.276041] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:18.276054] WDIAG [SQL.OPT] calculate_phy_table_location_info 
(ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:18.276065] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:18.276075] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:18.276082] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:18.276093] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:18.276103] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:18.276115] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:18.276125] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:18.276134] WDIAG 
[SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:18.276143] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:18.276153] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:18.276163] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:18.276176] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:18.276187] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:18.276197] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:18.276207] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:18.276218] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:18.276224] WDIAG [SERVER] query 
(ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=2, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:18.276243] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] will sleep(sleep_us=2000, remain_us=1983488, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.276639] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D8E48924-0-0] [lt=20][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:18.278472] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.278625] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.278646] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.278659] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.278673] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.278683] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.278694] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.278707] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738278706, replica_locations:[]}) [2024-09-13 13:02:18.278724] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.278743] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:2, local_retry_times:2, err_:-4721, 
err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:18.278757] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.278767] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.278777] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:18.278788] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:18.278795] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:18.278807] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:18.278820] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.278855] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546155973, 
cache_obj->added_lc()=false, cache_obj->get_object_id()=3, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.279723] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=29][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.279753] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=29][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.279889] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.280116] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.280137] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.280150] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.280164] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.280173] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.280187] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.280201] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738280201, replica_locations:[]}) [2024-09-13 13:02:18.280220] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.280233] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", 
cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.280246] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.280263] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:18.280274] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:18.280285] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:18.280301] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:18.280314] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] Failed to calculate table location(ret=-4721) 
[2024-09-13 13:02:18.280321] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:18.280328] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:18.280334] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:18.280344] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:18.280354] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:18.280366] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:18.280375] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:18.280385] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to generate raw 
plan(ret=-4721) [2024-09-13 13:02:18.280394] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:18.280404] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:18.280413] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:18.280426] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:18.280445] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:18.280453] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:18.280463] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:18.280474] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:18.280484] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, 
tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=3, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:18.280504] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] will sleep(sleep_us=3000, remain_us=1979227, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.283707] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.284107] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.284129] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.284141] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.284155] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.284167] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.284180] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.284195] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738284195, replica_locations:[]}) [2024-09-13 13:02:18.284214] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.284233] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:3, local_retry_times:3, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:18.284249] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.284259] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] 
[lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.284271] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:18.284281] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:18.284287] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:18.284297] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:18.284310] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.284344] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546161461, cache_obj->added_lc()=false, cache_obj->get_object_id()=4, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.285145] WDIAG [SHARE.LOCATION] nonblock_get 
(ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.285176] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=30][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.285262] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.285481] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.285504] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.285517] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.285532] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.285544] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.285558] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.285586] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738285586, replica_locations:[]}) [2024-09-13 13:02:18.285605] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.285618] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.285630] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, 
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.285645] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:18.285653] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:18.285664] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:18.285681] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:18.285694] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:18.285702] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:18.285710] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) 
[2024-09-13 13:02:18.285719] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:18.285727] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:18.285735] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:18.285743] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:18.285750] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:18.285758] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:18.285764] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:18.285771] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:18.285780] WDIAG 
[SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:18.285790] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:18.285801] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:18.285811] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:18.285819] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:18.285827] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:18.285834] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=4, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:18.285853] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] will sleep(sleep_us=4000, remain_us=1973878, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.286265] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=7][errcode=0] server is initiating(server_id=0, local_seq=3, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:18.286534] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEE-0-0] [lt=10][errcode=0] server is initiating(server_id=0, local_seq=4, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:18.287407] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEE-0-0] [lt=19][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:18.288231] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=15][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:18.290009] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.290229] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.290254] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=23][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.290267] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.290283] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.290295] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.290308] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.290324] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738290323, replica_locations:[]}) [2024-09-13 13:02:18.290343] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.290363] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:4, local_retry_times:4, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:18.290381] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.290391] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.290404] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:18.290411] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:18.290420] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:18.290448] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:18.290461] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.290495] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546167613, cache_obj->added_lc()=false, cache_obj->get_object_id()=5, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.291567] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.291590] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=23][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.291706] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.291909] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.291926] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 
13:02:18.291936] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.291947] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.291955] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.291964] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.291972] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738291971, replica_locations:[]}) [2024-09-13 13:02:18.291984] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.291993] WDIAG [SHARE.LOCATION] get 
(ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.292002] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.292013] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:18.292019] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:18.292026] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:18.292037] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:18.292046] WDIAG [SQL.OPT] 
calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:18.292050] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:18.292057] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:18.292061] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:18.292065] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:18.292069] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=3][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:18.292075] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:18.292083] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721) 
[2024-09-13 13:02:18.292087] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:18.292091] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:18.292096] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:18.292100] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=3][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:18.292108] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:18.292118] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:18.292128] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:18.292136] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:18.292146] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 
13:02:18.292155] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=5, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:18.292175] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] will sleep(sleep_us=5000, remain_us=1967556, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.297347] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.297523] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.297541] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.297550] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.297560] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.297568] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.297577] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.297588] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738297588, replica_locations:[]}) [2024-09-13 13:02:18.297600] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.297615] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:5, local_retry_times:5, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:18.297628] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] 
[lt=11][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.297638] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.297651] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:18.297659] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:18.297669] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:18.297679] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:18.297688] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.297713] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546174834, cache_obj->added_lc()=false, cache_obj->get_object_id()=6, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 
0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.298355] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.298380] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.298514] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.298687] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.298703] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.298713] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.298723] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] 
[lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.298728] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.298738] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.298746] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738298745, replica_locations:[]}) [2024-09-13 13:02:18.298758] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.298765] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:18.298770] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] fail to get log stream 
location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:18.298780] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:18.298785] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:18.298792] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:18.298801] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:18.298829] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1960902, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.305011] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:18.305214] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.305233] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.305242] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.305253] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.305261] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.305271] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.305279] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738305278, replica_locations:[]}) [2024-09-13 13:02:18.305291] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.305318] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.305327] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.305340] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.305367] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546182488, cache_obj->added_lc()=false, cache_obj->get_object_id()=7, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.305554] INFO [OCCAM] get_idx (ob_occam_time_guard.h:224) [20113][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4] init point thread id with(&point=0x55a3873cb8c0, idx_=3729, point=[thread id=20113, timeout ts=08:00:00.0, last 
click point="(null):(null):0", last click ts=08:00:00.0], thread_id=20113) [2024-09-13 13:02:18.305610] INFO [COORDINATOR] add_failure_event (ob_failure_detector.cpp:200) [20113][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=25] success report a failure event without recover detect operation(ret=0, ret="OB_SUCCESS", event={type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}, events_with_ops_=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:18.305713] INFO [SHARE] add_event (ob_event_history_table_operator.h:266) [20113][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=36] event table add task(ret=0, event_table_name="__all_server_event_history", sql=INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882)) [2024-09-13 13:02:18.305725] INFO [COORDINATOR] insert_event_to_table_ (ob_failure_detector.cpp:309) [20113][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10] insert into __all_server_event_history success(ret=0, ret="OB_SUCCESS", event={type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}, recover_operation={this:0x2b07c6d51bd8, base:null, &allocator_:0x55a3862e79c8, &DEFAULT_ALLOCATOR:0x55a3862e79c8}) [2024-09-13 13:02:18.306140] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.306368] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet 
code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.306387] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.306399] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.306410] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.306419] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.306428] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.306454] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738306453, replica_locations:[]}) [2024-09-13 13:02:18.306491] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1953239, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.306899] WDIAG [COORDINATOR] detect_schema_not_refreshed_ (ob_failure_detector.cpp:465) [20113][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13][errcode=0] schema not refreshed, add failure event(schema_not_refreshed=true, now=1726203738305589) [2024-09-13 13:02:18.307009] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20291][T1_L0_G0][T1][YB42AC103326-00062119D7A51A91-0-0] [lt=7][errcode=0] server is initiating(server_id=0, local_seq=5, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:18.307382] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=13] table not exist(tenant_id=1, database_id=201001, table_name=__all_server_event_history, table_name.ptr()="data_size:26, data:5F5F616C6C5F7365727665725F6576656E745F686973746F7279", ret=-5019) [2024-09-13 13:02:18.307405] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=21][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server_event_history, ret=-5019) [2024-09-13 13:02:18.307414] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_server_event_history, db_name=oceanbase) [2024-09-13 13:02:18.307423] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=8][errcode=-5019] resolve table relation factor 
failed(ret=-5019, table_name=__all_server_event_history) [2024-09-13 13:02:18.307429] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=4][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:18.307433] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:18.307453] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=17][errcode=-5019] Table 'oceanbase.__all_server_event_history' doesn't exist [2024-09-13 13:02:18.307459] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13282) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=5][errcode=-5019] fail to resolve basic table without cte(ret=-5019) [2024-09-13 13:02:18.307465] WDIAG [SQL.RESV] resolve_insert_field (ob_insert_resolver.cpp:437) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=3][errcode=-5019] fail to exec resolve_basic_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:18.307476] WDIAG [SQL.RESV] resolve_insert_clause (ob_insert_resolver.cpp:166) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=10][errcode=-5019] failed to resolve insert filed(ret=-5019) [2024-09-13 13:02:18.307483] WDIAG [SQL.RESV] resolve (ob_insert_resolver.cpp:96) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=6][errcode=-5019] resolve single table insert failed(ret=-5019) [2024-09-13 13:02:18.307492] WDIAG [SQL.RESV] stmt_resolver_func (ob_resolver.cpp:173) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=7][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3294) [2024-09-13 13:02:18.307509] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) 
[19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=15][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:18.307514] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=4][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:18.307522] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=6][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:18.307526] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:18.307531] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882), ret=-5019) [2024-09-13 13:02:18.307539] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:18.307543] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES 
(usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882)"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:18.307556] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=9][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:18.307566] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=8][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:18.307571] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=4][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:18.307583] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=11][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:18.307614] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=3][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882)"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:18.307624] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19948][EvtHisUpdTask][T1][YB42AC103323-000621F921560C7D-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.307633] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19948][EvtHisUpdTask][T0][YB42AC103323-000621F921560C7D-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882)"}, aret=-5019, ret=-5019) [2024-09-13 13:02:18.307644] WDIAG [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1570) [19948][EvtHisUpdTask][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882)) [2024-09-13 13:02:18.307657] INFO [SERVER] execute_write_inner (ob_inner_sql_connection.cpp:1680) [19948][EvtHisUpdTask][T0][Y0-0000000000000000-0-0] [lt=9] execute write sql(ret=-5019, tenant_id=1, affected_rows=0, sql=INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882)) [2024-09-13 13:02:18.307684] WDIAG [SERVER] 
retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19948][EvtHisUpdTask][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:18.307694] WDIAG [SERVER] execute_write (ob_inner_sql_connection.cpp:1523) [19948][EvtHisUpdTask][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] execute_write failed(ret=-5019, tenant_id=1, sql=INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882), is_user_sql=false) [2024-09-13 13:02:18.307712] WDIAG [SERVER] execute_write (ob_inner_sql_connection.cpp:1512) [19948][EvtHisUpdTask][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-5019] execute_write failed(ret=-5019, tenant_id=1, sql="INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882)") [2024-09-13 13:02:18.307720] WDIAG [COMMON.MYSQLP] write (ob_mysql_proxy.cpp:156) [19948][EvtHisUpdTask][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, conn=0x2b07a13e0060, start=1726203738305728, sql=INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882)) 
[2024-09-13 13:02:18.307763] WDIAG [SHARE] process_task (ob_event_history_table_operator.cpp:329) [19948][EvtHisUpdTask][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-5019] execute sql failed(sql=INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882), ret=-5019) [2024-09-13 13:02:18.307773] WDIAG [SHARE] process (ob_event_history_table_operator.cpp:173) [19948][EvtHisUpdTask][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] process_task failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=INSERT INTO __all_server_event_history (gmt_create, module, event, name1, value1, name2, value2, name3, value3, value4, value5, value6, svr_ip, svr_port) VALUES (usec_to_time(1726203738305651), 'FAILURE_DETECTOR', 'schema not refreshed', 'FAILURE_MODULE', 'SCHEMA', 'FAILURE_TYPE', 'SCHEMA NOT REFRESHED', 'AUTO_RECOVER', 'False', '', '', '', '172.16.51.35', 2882), is_delete=false, create_time_=1726203738305692, exec_tenant_id=1) [2024-09-13 13:02:18.308219] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D7A51A91-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:18.313661] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.313918] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, 
arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.313940] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.313948] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.313956] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.313962] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.313968] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.313979] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738313979, replica_locations:[]}) [2024-09-13 13:02:18.313991] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.314008] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.314016] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.314028] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.314056] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546191176, cache_obj->added_lc()=false, cache_obj->get_object_id()=8, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.314826] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.314967] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, 
arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.314985] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.314994] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.315002] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.315011] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.315017] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.315027] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738315026, replica_locations:[]}) [2024-09-13 13:02:18.315063] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1944668, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.323235] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.323456] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.323473] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.323486] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.323500] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.323507] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.323519] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.323530] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738323529, replica_locations:[]}) [2024-09-13 13:02:18.323545] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.323562] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.323569] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.323586] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.323614] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546200734, cache_obj->added_lc()=false, cache_obj->get_object_id()=9, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.324291] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.324467] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:18.324481] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.324494] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.324515] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.324527] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.324532] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.324542] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.324533] INFO [STORAGE.TRANS] statistics (ob_location_adapter.cpp:72) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=1] location adapter statistics(renew_access=0, total_access=1, error_count=1, renew_rate=0.000000000e+00) [2024-09-13 13:02:18.324550] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738324550, replica_locations:[]}) [2024-09-13 13:02:18.324562] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:18.324576] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13] refresh gts(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1, need_refresh=false, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:18.324584] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1935147, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.324585] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) 
[20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=1] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:18.325612] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.325628] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.325635] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738325599) [2024-09-13 13:02:18.331539] INFO [RPC.OBRPC] do_client_loop (ob_net_keepalive.cpp:654) [20050][KeepAliveClient][T0][Y0-0000000000000000-0-0] [lt=17] dest added, start to send keepalive data, addr : "172.16.51.35:2882" [2024-09-13 13:02:18.331619] INFO [RPC.OBRPC] check_connect (ob_net_keepalive.cpp:553) [20050][KeepAliveClient][T0][Y0-0000000000000000-0-0] [lt=15] connect ok, fd: 120, conn: "172.16.51.35:2882" [2024-09-13 13:02:18.331639] INFO [RPC.OBRPC] do_client_loop (ob_net_keepalive.cpp:654) [20050][KeepAliveClient][T0][Y0-0000000000000000-0-0] [lt=10] dest added, start to send keepalive data, addr : "172.16.51.36:2882" [2024-09-13 13:02:18.331664] INFO [RPC.OBRPC] do_client_loop (ob_net_keepalive.cpp:654) [20050][KeepAliveClient][T0][Y0-0000000000000000-0-0] [lt=6] dest added, start to send keepalive data, addr : "172.16.51.37:2882" [2024-09-13 13:02:18.331696] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] sock regist: 0x2b07b3e20740 fd=123 
[2024-09-13 13:02:18.331709] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=11] [ussl] accept new connection, fd:123, src_addr:172.16.51.35:50080 [2024-09-13 13:02:18.331724] INFO acceptfd_handle_first_readable_event (handle-event.c:378) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] recv non-negotiation message, the fd will be dispatched, fd:123, src_addr:172.16.51.35:50080, magic:0x78563412 [2024-09-13 13:02:18.331733] INFO dispatch_accept_fd_to_certain_group (group.c:691) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=8] PNIO dispatch fd to oblistener, fd:123 [2024-09-13 13:02:18.331738] INFO [RPC] read_client_magic (ob_listener.cpp:226) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] read negotiation msg(rcv_byte=20) [2024-09-13 13:02:18.331743] INFO [RPC] read_client_magic (ob_listener.cpp:246) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] read_client_magic, (client_magic=7386785325300370467, index=0) [2024-09-13 13:02:18.331749] INFO [RPC] trace_connection_info (ob_listener.cpp:290) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] oblistener receive connection from(peer="172.16.51.35:50080") [2024-09-13 13:02:18.331756] INFO [RPC] do_one_event (ob_listener.cpp:421) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=8] dispatch to(client_magic=7386785325300370467, index=0) [2024-09-13 13:02:18.331760] INFO [RPC] connection_redispatch (ob_listener.cpp:268) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] dipatch(conn_fd=123, count=1, index=0, wrfd=58) [2024-09-13 13:02:18.331770] INFO [RPC] connection_redispatch (ob_listener.cpp:274) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] dispatch success!(conn_fd=123, wrfd=58) [2024-09-13 13:02:18.331790] INFO [RPC.OBRPC] do_server_loop (ob_net_keepalive.cpp:461) [20049][KeepAliveServer][T0][Y0-0000000000000000-0-0] [lt=12] new connection established, fd: 123, addr: "172.16.51.35:50080" [2024-09-13 13:02:18.333775] 
WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.334127] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.334145] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.334155] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.334163] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.334172] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.334178] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.334115] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] sock regist: 
0x2b07b3e20740 fd=124 [2024-09-13 13:02:18.334190] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738334189, replica_locations:[]}) [2024-09-13 13:02:18.334200] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.334205] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=86] [ussl] accept new connection, fd:124, src_addr:172.16.51.36:53870 [2024-09-13 13:02:18.334215] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.334224] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.334239] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.334269] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546211389, cache_obj->added_lc()=false, cache_obj->get_object_id()=10, 
cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.335061] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.335216] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.335235] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.335244] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.335252] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.335258] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.335267] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.335275] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738335274, replica_locations:[]}) [2024-09-13 13:02:18.335314] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=10000, remain_us=1924416, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.345476] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.345658] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.345678] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.345686] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.345694] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.345699] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.345707] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.345715] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738345714, replica_locations:[]}) [2024-09-13 13:02:18.345727] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.345742] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.345748] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.345766] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.345796] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546222916, cache_obj->added_lc()=false, cache_obj->get_object_id()=11, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.346529] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.346720] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.346737] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.346746] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.346756] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.346762] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.346769] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.346776] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738346776, replica_locations:[]}) [2024-09-13 13:02:18.346812] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1912919, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.347493] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A48-0-0] [lt=5][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", 
version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738347013) [2024-09-13 13:02:18.347520] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A48-0-0] [lt=22][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203738347013}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:18.347580] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:18.347595] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738347576) [2024-09-13 13:02:18.347603] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203738225716, cluster_heartbeat_interval_=100000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:18.347623] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.347631] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.347635] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738347613) [2024-09-13 13:02:18.347881] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=17] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:18.351965] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20302][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1816771166208) [2024-09-13 13:02:18.352060] INFO register_pm (ob_page_manager.cpp:40) [20302][][T0][Y0-0000000000000000-0-0] [lt=24] register pm finish(ret=0, &pm=0x2b07d7a56340, pm.get_tid()=20302, tenant_id=500) [2024-09-13 13:02:18.352087] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20302][][T1][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=188) [2024-09-13 13:02:18.352096] 
INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20302][][T1][Y0-0000000000000000-0-0] [lt=7] Init thread local success [2024-09-13 13:02:18.352102] INFO unregister_pm (ob_page_manager.cpp:50) [20302][][T1][Y0-0000000000000000-0-0] [lt=4] unregister pm finish(&pm=0x2b07d7a56340, pm.get_tid()=20302) [2024-09-13 13:02:18.352117] INFO register_pm (ob_page_manager.cpp:40) [20302][][T1][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d7a56340, pm.get_tid()=20302, tenant_id=1) [2024-09-13 13:02:18.352227] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20303][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1821066133504) [2024-09-13 13:02:18.352317] INFO register_pm (ob_page_manager.cpp:40) [20303][][T0][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d7ad4340, pm.get_tid()=20303, tenant_id=500) [2024-09-13 13:02:18.352336] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20303][][T1][Y0-0000000000000000-0-0] [lt=12] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=189) [2024-09-13 13:02:18.352341] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20303][][T1][Y0-0000000000000000-0-0] [lt=4] Init thread local success [2024-09-13 13:02:18.352336] INFO [SERVER.OMT] check_worker_count (ob_tenant.cpp:507) [19931][pnio1][T0][YB42AC103326-00062119EC8D7B33-0-0] [lt=5] worker thread created(tenant_->id()=1, group_id_=5, token=2) [2024-09-13 13:02:18.352345] INFO unregister_pm (ob_page_manager.cpp:50) [20303][][T1][Y0-0000000000000000-0-0] [lt=3] unregister pm finish(&pm=0x2b07d7ad4340, pm.get_tid()=20303) [2024-09-13 13:02:18.352347] INFO [SERVER.OMT] recv_group_request (ob_tenant.cpp:1382) [19931][pnio1][T0][YB42AC103326-00062119EC8D7B33-0-0] [lt=11] create group successfully(id=1, group_id=5, group=0x2b07d6c42030) [2024-09-13 13:02:18.352352] INFO register_pm (ob_page_manager.cpp:40) [20303][][T1][Y0-0000000000000000-0-0] [lt=6] register pm finish(ret=0, 
&pm=0x2b07d7ad4340, pm.get_tid()=20303, tenant_id=1) [2024-09-13 13:02:18.352668] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20303][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B33-0-0] [lt=4] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:18.352687] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20303][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B33-0-0] [lt=18][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203738350735], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:18.353106] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC2-0-0] [lt=14][errcode=-8004] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:18.358035] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.358224] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.358244] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.358255] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.358267] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] 
[lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.358272] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.358282] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.358292] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738358292, replica_locations:[]}) [2024-09-13 13:02:18.358306] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.358339] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.358348] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.358364] WDIAG [SQL] move_to_sqlstat_cache 
(ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.358400] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546235518, cache_obj->added_lc()=false, cache_obj->get_object_id()=12, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.359357] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.359498] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.359519] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.359529] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.359540] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] 
[lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.359546] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.359555] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.359564] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738359563, replica_locations:[]}) [2024-09-13 13:02:18.359609] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1900122, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.371849] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.372036] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) 
[2024-09-13 13:02:18.372059] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.372071] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.372084] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.372090] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.372100] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.372110] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738372110, replica_locations:[]}) [2024-09-13 13:02:18.372124] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.372148] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.372154] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.372175] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.372217] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546249334, cache_obj->added_lc()=false, cache_obj->get_object_id()=13, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.373241] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.373420] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", 
cluster_id:1726203323}) [2024-09-13 13:02:18.373469] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=47][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.373483] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.373492] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.373497] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.373504] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.373514] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738373513, replica_locations:[]}) [2024-09-13 13:02:18.373560] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=13000, remain_us=1886171, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.386745] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.386916] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.386938] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.386948] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.386960] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.386968] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.386975] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.386985] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738386985, replica_locations:[]}) [2024-09-13 13:02:18.386998] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.387019] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.387027] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.387074] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.387121] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546264239, cache_obj->added_lc()=false, cache_obj->get_object_id()=14, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.388001] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.388228] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.388249] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.388258] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.388269] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.388276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.388285] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.388293] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738388293, replica_locations:[]}) [2024-09-13 13:02:18.388336] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1871394, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.402537] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.402818] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.402839] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.402848] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, 
replica count=0) [2024-09-13 13:02:18.402859] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.402865] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.402883] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.402894] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738402894, replica_locations:[]}) [2024-09-13 13:02:18.402907] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.402926] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.402934] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.402955] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.402988] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546280107, cache_obj->added_lc()=false, cache_obj->get_object_id()=15, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.403772] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.403972] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.403999] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.404011] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.404025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.404033] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.404046] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.404059] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738404058, replica_locations:[]}) [2024-09-13 13:02:18.404107] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1855623, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.419311] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.419625] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.419645] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.419655] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.419666] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.419674] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.419684] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.419695] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203738419695, replica_locations:[]}) [2024-09-13 13:02:18.419708] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.419726] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.419734] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.419752] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.419785] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546296904, cache_obj->added_lc()=false, cache_obj->get_object_id()=16, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.420686] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.420942] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.420962] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.420971] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.420978] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.420987] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.420996] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.421007] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203738421006, replica_locations:[]}) [2024-09-13 13:02:18.421110] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1838621, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.424106] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=16] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14055901594, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:18.437309] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=42][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.438009] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.438029] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.438040] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.438053] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.438059] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.438068] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.438077] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738438076, replica_locations:[]}) [2024-09-13 13:02:18.438090] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.438109] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.438117] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.438135] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.438172] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546315290, cache_obj->added_lc()=false, cache_obj->get_object_id()=17, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.439058] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.439232] WDIAG [RPC] wait (ob_async_rpc_proxy.h:422) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] execute rpc failed(rc=-5150, server="172.16.51.37:2882", timeout=2000000, packet code=330, arg={addr:"172.16.51.37:2882", cluster_id:1726203323}) [2024-09-13 13:02:18.439251] WDIAG [SHARE.PT] do_detect_master_rs_ls_ (ob_rpc_ls_table.cpp:315) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=0] fail to get result by rpc, just ignore(tmp_ret=-5150, addr="172.16.51.37:2882") [2024-09-13 13:02:18.439260] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.439272] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.439278] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.439285] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.439292] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738439291, replica_locations:[]}) [2024-09-13 13:02:18.439332] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1820399, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.447505] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A49-0-0] [lt=34][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738447085) [2024-09-13 13:02:18.447533] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) 
[20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A49-0-0] [lt=24][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203738447085}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:18.447580] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.447592] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.447598] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738447565) [2024-09-13 13:02:18.449451] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] sock regist: 0x2b07b3e211a0 fd=125 [2024-09-13 13:02:18.449471] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=17] [ussl] accept new connection, fd:125, 
src_addr:172.16.51.37:59340 [2024-09-13 13:02:18.449495] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] auth mothod is NONE, the fd will be dispatched, fd:125, src_addr:172.16.51.37:59340 [2024-09-13 13:02:18.449504] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=8] PNIO dispatch fd to certain group, fd:125, gid:0x100000000 [2024-09-13 13:02:18.449549] INFO pkts_sk_init (pkts_sk_factory.h:23) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=10] PNIO set pkts_sk_t sock_id s=0x2b07b0be6048, s->id=65532 [2024-09-13 13:02:18.449566] INFO pkts_sk_new (pkts_sk_factory.h:51) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=16] PNIO sk_new: s=0x2b07b0be6048 [2024-09-13 13:02:18.449583] INFO eloop_regist (eloop.c:47) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=10] PNIO sock regist: 0x2b07b0be6048 fd=125 [2024-09-13 13:02:18.449593] INFO on_accept (listenfd.c:39) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO accept new connection, ns=0x2b07b0be6048, fd=fd:125:local:"172.16.51.37:59340":remote:"172.16.51.37:59340" [2024-09-13 13:02:18.449659] WDIAG listenfd_handle_event (listenfd.c:71) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=5][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1 [2024-09-13 13:02:18.450510] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] sock regist: 0x2b07b3e211a0 fd=126 [2024-09-13 13:02:18.450523] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=12] [ussl] accept new connection, fd:126, src_addr:172.16.51.37:59342 [2024-09-13 13:02:18.450535] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] auth mothod is NONE, the fd will be dispatched, fd:126, src_addr:172.16.51.37:59342 [2024-09-13 13:02:18.450544] INFO 
dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=8] PNIO dispatch fd to certain group, fd:126, gid:0x100000001 [2024-09-13 13:02:18.450570] INFO pkts_sk_init (pkts_sk_factory.h:23) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO set pkts_sk_t sock_id s=0x2b07b0be6a58, s->id=65532 [2024-09-13 13:02:18.450587] INFO pkts_sk_new (pkts_sk_factory.h:51) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=16] PNIO sk_new: s=0x2b07b0be6a58 [2024-09-13 13:02:18.450597] INFO eloop_regist (eloop.c:47) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO sock regist: 0x2b07b0be6a58 fd=126 [2024-09-13 13:02:18.450624] INFO on_accept (listenfd.c:39) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=22] PNIO accept new connection, ns=0x2b07b0be6a58, fd=fd:126:local:"172.16.51.37:59342":remote:"172.16.51.37:59342" [2024-09-13 13:02:18.450665] WDIAG listenfd_handle_event (listenfd.c:71) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=12][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1 [2024-09-13 13:02:18.450673] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.452637] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.452931] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.456516] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.456790] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.456809] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.456815] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.456826] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.456835] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738456834, replica_locations:[]}) [2024-09-13 13:02:18.456845] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.456866] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.456892] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.456925] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.456957] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546334076, cache_obj->added_lc()=false, cache_obj->get_object_id()=18, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.457727] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.458061] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.458081] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.458102] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.458112] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.458122] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738458122, replica_locations:[]}) [2024-09-13 13:02:18.458162] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1801569, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.461375] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=12] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9) [2024-09-13 13:02:18.476355] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.476632] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.476651] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.476658] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.476666] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.476676] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738476675, replica_locations:[]}) [2024-09-13 13:02:18.476689] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.476709] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.477620] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.477920] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.477939] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.477945] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.477955] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.477963] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738477962, replica_locations:[]}) [2024-09-13 13:02:18.478004] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1781727, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, 
v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.497206] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.497491] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.497519] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.497532] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.497548] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.497565] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738497564, replica_locations:[]}) [2024-09-13 13:02:18.497586] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.497612] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.498749] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.498998] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.499046] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=46][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.499055] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.499063] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.499073] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738499072, replica_locations:[]}) [2024-09-13 13:02:18.499121] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1760609, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.505427] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] [ussl] sock regist: 0x2b07b3e211a0 fd=127 [2024-09-13 13:02:18.505457] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=26] [ussl] accept new connection, fd:127, src_addr:172.16.51.37:59348 [2024-09-13 13:02:18.519358] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.519659] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.519680] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.519687] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.519695] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.519706] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738519705, replica_locations:[]}) [2024-09-13 13:02:18.519721] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.519745] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.520957] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.521050] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.521071] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.521077] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.521088] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.521099] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738521098, replica_locations:[]}) [2024-09-13 13:02:18.521151] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1738579, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.531784] INFO [RPC.OBRPC] check_connect (ob_net_keepalive.cpp:553) [20050][KeepAliveClient][T0][Y0-0000000000000000-0-0] [lt=6] connect ok, fd: 121, conn: "172.16.51.36:2882" [2024-09-13 13:02:18.531814] INFO [RPC.OBRPC] check_connect (ob_net_keepalive.cpp:553) [20050][KeepAliveClient][T0][Y0-0000000000000000-0-0] [lt=18] connect ok, fd: 122, conn: "172.16.51.37:2882" [2024-09-13 13:02:18.534108] INFO acceptfd_handle_first_readable_event (handle-event.c:378) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] recv 
non-negotiation message, the fd will be dispatched, fd:124, src_addr:172.16.51.36:53870, magic:0x78563412 [2024-09-13 13:02:18.534123] INFO dispatch_accept_fd_to_certain_group (group.c:691) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=12] PNIO dispatch fd to oblistener, fd:124 [2024-09-13 13:02:18.534129] INFO [RPC] read_client_magic (ob_listener.cpp:226) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] read negotiation msg(rcv_byte=19) [2024-09-13 13:02:18.534135] INFO [RPC] read_client_magic (ob_listener.cpp:246) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] read_client_magic, (client_magic=7386785325300370467, index=0) [2024-09-13 13:02:18.534144] INFO [RPC] trace_connection_info (ob_listener.cpp:290) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=8] oblistener receive connection from(peer="172.16.51.36:53870") [2024-09-13 13:02:18.534152] INFO [RPC] do_one_event (ob_listener.cpp:421) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] dispatch to(client_magic=7386785325300370467, index=0) [2024-09-13 13:02:18.534156] INFO [RPC] connection_redispatch (ob_listener.cpp:268) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] dipatch(conn_fd=124, count=1, index=0, wrfd=58) [2024-09-13 13:02:18.534168] INFO [RPC] connection_redispatch (ob_listener.cpp:274) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] dispatch success!(conn_fd=124, wrfd=58) [2024-09-13 13:02:18.534189] INFO [RPC.OBRPC] do_server_loop (ob_net_keepalive.cpp:461) [20049][KeepAliveServer][T0][Y0-0000000000000000-0-0] [lt=8] new connection established, fd: 124, addr: "172.16.51.36:53870" [2024-09-13 13:02:18.542397] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.542815] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] 
fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.542835] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.542842] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.542851] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.542865] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738542864, replica_locations:[]}) [2024-09-13 13:02:18.542901] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=34] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.542930] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 
13:02:18.544066] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.544293] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.544310] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.544317] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.544327] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.544338] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738544337, replica_locations:[]}) [2024-09-13 13:02:18.544386] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=22000, remain_us=1715345, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.547639] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:18.547664] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738547631) [2024-09-13 13:02:18.547676] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203738347612, cluster_heartbeat_interval_=200000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:18.547701] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.547708] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.547715] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738547686) [2024-09-13 13:02:18.547728] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4A-0-0] [lt=25][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738547161) [2024-09-13 13:02:18.547768] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.547773] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.547778] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738547764) [2024-09-13 13:02:18.547760] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4A-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203738547161}, twrs={inited:true, tenant_id:1, 
self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:18.566599] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.566886] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.566906] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.566914] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.566925] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.566939] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738566938, replica_locations:[]}) [2024-09-13 13:02:18.566953] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.566976] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.568064] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.568253] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.568272] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.568278] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.568285] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.568294] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738568293, replica_locations:[]}) [2024-09-13 13:02:18.568343] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1691388, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.591632] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.591922] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.591945] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.591952] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.591960] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.591976] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738591975, replica_locations:[]}) [2024-09-13 13:02:18.591992] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.592017] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.593309] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.593484] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.593504] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.593510] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.593518] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.593528] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738593527, replica_locations:[]}) [2024-09-13 13:02:18.593582] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=24000, remain_us=1666148, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.613907] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=47] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, 
req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 
9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:18.617829] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.618097] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.618134] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=35][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.618144] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.618155] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.618173] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738618171, replica_locations:[]}) [2024-09-13 13:02:18.618194] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.618244] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.619626] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.619833] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.619859] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.619873] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.619900] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.619914] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738619913, replica_locations:[]}) [2024-09-13 13:02:18.619981] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=25000, remain_us=1639750, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.624472] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=25] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14051707290, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:18.645279] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.645601] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.645636] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.645648] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.645665] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.645687] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738645686, replica_locations:[]}) [2024-09-13 13:02:18.645711] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.645745] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.647203] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.647525] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.647553] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.647567] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.647582] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.647600] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738647599, replica_locations:[]}) [2024-09-13 13:02:18.647647] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4B-0-0] [lt=25][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, 
valid_part_count=0, total_part_count=0, generate_timestamp=1726203738647229) [2024-09-13 13:02:18.647669] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1612061, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.647669] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4B-0-0] [lt=20][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203738647229}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:18.647702] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.647717] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.647726] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738647684) [2024-09-13 13:02:18.661487] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=21] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8) [2024-09-13 13:02:18.667578] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEE-0-0] [lt=18][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:18.667613] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEE-0-0] [lt=24][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=379266) [2024-09-13 13:02:18.667635] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEE-0-0] [lt=17][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1) [2024-09-13 13:02:18.667649] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:923) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEE-0-0] [lt=11][errcode=-4012] exec base before process failed(ret=-4012) [2024-09-13 13:02:18.667659] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEE-0-0] [lt=9][errcode=-4012] before process fail(ret=-4012) [2024-09-13 13:02:18.667852] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEF-0-0] [lt=7][errcode=0] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:18.668801] WDIAG [SERVER] 
submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEF-0-0] [lt=14][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:18.670125] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20293][T1_L0_G0][T1][YB42AC103326-00062119EC0A1172-0-0] [lt=7][errcode=0] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:18.671077] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119EC0A1172-0-0] [lt=16][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:18.673935] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.674264] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.674290] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.674301] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.674315] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.674332] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738674330, replica_locations:[]}) [2024-09-13 13:02:18.674351] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.674381] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.675829] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.676089] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.676112] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.676166] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=52] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.676179] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.676196] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738676195, replica_locations:[]}) [2024-09-13 13:02:18.676264] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=27000, remain_us=1583467, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.703540] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.703840] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.703867] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.703889] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.703902] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.703919] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738703918, replica_locations:[]}) [2024-09-13 13:02:18.703939] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.704065] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.705099] INFO acceptfd_handle_first_readable_event (handle-event.c:378) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] recv non-negotiation message, the fd will be dispatched, fd:127, src_addr:172.16.51.37:59348, magic:0x78563412 [2024-09-13 13:02:18.705117] INFO dispatch_accept_fd_to_certain_group (group.c:691) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] 
[lt=15] PNIO dispatch fd to oblistener, fd:127 [2024-09-13 13:02:18.705126] INFO [RPC] read_client_magic (ob_listener.cpp:226) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] read negotiation msg(rcv_byte=19) [2024-09-13 13:02:18.705135] INFO [RPC] read_client_magic (ob_listener.cpp:246) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=9] read_client_magic, (client_magic=7386785325300370467, index=0) [2024-09-13 13:02:18.705143] INFO [RPC] trace_connection_info (ob_listener.cpp:290) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] oblistener receive connection from(peer="172.16.51.37:59348") [2024-09-13 13:02:18.705150] INFO [RPC] do_one_event (ob_listener.cpp:421) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] dispatch to(client_magic=7386785325300370467, index=0) [2024-09-13 13:02:18.705157] INFO [RPC] connection_redispatch (ob_listener.cpp:268) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] dipatch(conn_fd=127, count=1, index=0, wrfd=58) [2024-09-13 13:02:18.705171] INFO [RPC] connection_redispatch (ob_listener.cpp:274) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=8] dispatch success!(conn_fd=127, wrfd=58) [2024-09-13 13:02:18.705198] INFO [RPC.OBRPC] do_server_loop (ob_net_keepalive.cpp:461) [20049][KeepAliveServer][T0][Y0-0000000000000000-0-0] [lt=8] new connection established, fd: 127, addr: "172.16.51.37:59348" [2024-09-13 13:02:18.705565] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.705776] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.705802] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.705816] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.705832] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.705850] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738705849, replica_locations:[]}) [2024-09-13 13:02:18.705940] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=28000, remain_us=1553791, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.715402] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=21][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:0, dropped:32, tid:19886}]) [2024-09-13 13:02:18.725173] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8] Construct Queue Num(construct_num=0, 
get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:18.725233] INFO [SERVER] check_config_mem_limit (ob_eliminate_task.cpp:79) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=23] change config mem limit(config_mem_limit_=16777216, mem_limit=96636764, tenant_id=1) [2024-09-13 13:02:18.725245] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:172) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Eliminate task evict sql audit(request_manager_->get_tenant_id()=1, queue_size=524288, config_mem_limit_=96636764, request_manager_->get_size_used()=0, evict_high_size_level=471859, evict_low_size_level=419430, allocator->allocated()=6239232, evict_high_mem_level=75665245, evict_low_mem_level=54693724) [2024-09-13 13:02:18.725260] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=14] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=6239232) [2024-09-13 13:02:18.734233] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.734470] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.734501] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.734508] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.734522] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.734536] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738734535, replica_locations:[]}) [2024-09-13 13:02:18.734553] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.734577] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.734586] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.734608] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.734670] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546611774, cache_obj->added_lc()=false, cache_obj->get_object_id()=29, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.735699] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.735908] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.735927] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.735933] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.735944] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.735956] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738735955, replica_locations:[]}) [2024-09-13 13:02:18.736009] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1523722, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.747727] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4C-0-0] [lt=37][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738747295) [2024-09-13 13:02:18.747753] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:18.747757] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4C-0-0] [lt=30][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, 
generate_timestamp:1726203738747295}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:18.747799] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738747746) [2024-09-13 13:02:18.747814] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203738547686, cluster_heartbeat_interval_=400000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:18.747835] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.747844] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.747853] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738747823) [2024-09-13 13:02:18.747865] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.747871] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.747888] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738747862) [2024-09-13 13:02:18.765227] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.765487] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.765511] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.765518] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.765527] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.765540] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738765539, replica_locations:[]}) [2024-09-13 13:02:18.765555] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.765578] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.765587] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.765612] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.765659] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546642776, cache_obj->added_lc()=false, cache_obj->get_object_id()=30, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.766659] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.766856] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.766886] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.766893] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.766906] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.766921] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738766920, replica_locations:[]}) [2024-09-13 13:02:18.766973] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1492757, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.797285] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.797607] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.797639] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.797650] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.797665] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.797685] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738797684, replica_locations:[]}) [2024-09-13 13:02:18.797706] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.797739] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.797758] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.797784] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.797841] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546674954, cache_obj->added_lc()=false, cache_obj->get_object_id()=31, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 
0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.799204] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.799477] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.799499] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.799506] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.799514] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.799524] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738799524, replica_locations:[]}) [2024-09-13 13:02:18.799583] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=31000, remain_us=1460147, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.815556] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=27][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-8004, dropped:7, tid:19931}]) [2024-09-13 13:02:18.824811] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14049610138, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:18.825108] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:18.830955] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.831249] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.831279] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.831293] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.831310] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.831329] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738831328, replica_locations:[]}) [2024-09-13 13:02:18.831357] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.831409] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.831424] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.831468] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.831527] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546708640, cache_obj->added_lc()=false, cache_obj->get_object_id()=32, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.832819] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.833184] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.833210] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.833222] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.833234] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.833266] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=26] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738833265, replica_locations:[]}) [2024-09-13 13:02:18.833336] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1426395, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.847772] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4D-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738847362) [2024-09-13 13:02:18.847800] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4D-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203738847362}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, 
cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:18.847832] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.847847] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:18.847856] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738847818) [2024-09-13 13:02:18.851647] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B34-0-0] [lt=5] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:18.851667] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B34-0-0] [lt=20][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203738851195], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:18.852133] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC3-0-0] [lt=15][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203738851780, dst_cluster_id:1724378954, cost_time:{len:40, 
arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62034993, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203738851491}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:18.852172] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC3-0-0] [lt=40][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.852738] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC3-0-0] [lt=5][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:18.861585] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=29] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7) [2024-09-13 13:02:18.865596] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.866115] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.866140] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.866154] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.866169] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.866187] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738866185, replica_locations:[]}) [2024-09-13 13:02:18.866208] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:18.866237] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:18.866249] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:18.866277] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:18.866332] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6546743445, cache_obj->added_lc()=false, cache_obj->get_object_id()=33, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:18.867615] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:18.867835] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.867862] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:18.867888] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:18.867903] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:18.867919] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738867918, replica_locations:[]}) [2024-09-13 13:02:18.867981] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=33000, remain_us=1391750, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:18.872274] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.872409] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=11] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.872811] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=13] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:18.881200] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20288][T1_L0_G0][T1][YB42AC103326-00062119D8E48924-0-0] [lt=14][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:18.881227] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20288][T1_L0_G0][T1][YB42AC103326-00062119D8E48924-0-0] [lt=26][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=603884) [2024-09-13 13:02:18.881238] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20288][T1_L0_G0][T1][YB42AC103326-00062119D8E48924-0-0] [lt=10][errcode=-4012] fail to try 
refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1)
[2024-09-13 13:02:18.881247] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:923) [20288][T1_L0_G0][T1][YB42AC103326-00062119D8E48924-0-0] [lt=8][errcode=-4012] exec base before process failed(ret=-4012)
[2024-09-13 13:02:18.881263] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20288][T1_L0_G0][T1][YB42AC103326-00062119D8E48924-0-0] [lt=15][errcode=-4012] before process fail(ret=-4012)
[2024-09-13 13:02:18.901218] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.901505] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.901531] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.901544] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.901559] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.901577] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738901576, replica_locations:[]})
[2024-09-13 13:02:18.901597] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:18.901625] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:18.901637] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:18.901666] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:18.901721] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546778834, cache_obj->added_lc()=false, cache_obj->get_object_id()=34, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:18.903020] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.903231] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.903256] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.903266] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.903281] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.903296] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738903295, replica_locations:[]})
[2024-09-13 13:02:18.903357] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=34000, remain_us=1356373, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:18.937636] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.937917] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.937946] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.937957] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.937971] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.937990] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738937989, replica_locations:[]})
[2024-09-13 13:02:18.938010] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:18.938039] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:18.938051] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:18.938080] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:18.938138] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546815251, cache_obj->added_lc()=false, cache_obj->get_object_id()=35, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:18.939431] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.939636] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.939661] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.939672] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.939687] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.939703] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738939702, replica_locations:[]})
[2024-09-13 13:02:18.939777] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=35000, remain_us=1319954, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:18.945342] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20053][sql_nio2][T0][Y0-0000000000000000-0-0] [lt=24][errcode=0] server is initiating(server_id=0, local_seq=8, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:18.945372] INFO [RPC.OBMYSQL] create_scramble_string (obsm_conn_callback.cpp:61) [20053][sql_nio2][T0][Y0-0000000000000000-0-0] [lt=26] init thread_rand succ(ret=0)
[2024-09-13 13:02:18.945379] INFO [RPC.OBMYSQL] sm_conn_build_handshake (obsm_conn_callback.cpp:121) [20053][sql_nio2][T0][Y0-0000000000000000-0-0] [lt=6] new mysql sessid created(conn.sessid_=3221225480, support_ssl=false)
[2024-09-13 13:02:18.945405] INFO [RPC.OBMYSQL] init (obsm_conn_callback.cpp:141) [20053][sql_nio2][T0][Y0-0000000000000000-0-0] [lt=6] sm conn init succ(conn.sessid_=3221225480, sess.client_addr_="172.16.51.35:34374")
[2024-09-13 13:02:18.945418] INFO [RPC.OBMYSQL] do_accept_one (ob_sql_nio.cpp:1089) [20053][sql_nio2][T0][Y0-0000000000000000-0-0] [lt=10] accept one succ(*s={this:0x2b07baffef30, session_id:3221225480, trace_id:Y0-0000000000000000-0-0, sql_handling_stage:-1, sql_initiative_shutdown:false, reader:{fd:128}, err:0, last_decode_time:0, pending_write_task:{buf:null, sz:0}, need_epoll_trigger_write:false, consume_size:0, pending_flag:0, may_handling_flag:true, handler_close_flag:false})
[2024-09-13 13:02:18.945743] INFO [SERVER] extract_user_tenant (obmp_connect.cpp:83) [20053][sql_nio2][T0][Y0-0000000000000000-0-0] [lt=36] username and tenantname(user_name=root, tenant_name=)
[2024-09-13 13:02:18.945777] INFO [SERVER] dispatch_req (ob_srv_deliver.cpp:285) [20053][sql_nio2][T1][Y0-0000000000000000-0-0] [lt=16] succeed to dispatch to tenant mysql queue(tenant_id=1)
[2024-09-13 13:02:18.945788] INFO [SERVER] dispatch_req (ob_srv_deliver.cpp:290) [20053][sql_nio2][T1][Y0-0000000000000000-0-0] [lt=11] mysql login queue(mysql_queue->queue_.size()=0)
[2024-09-13 13:02:18.945853] INFO [SERVER] verify_connection (obmp_connect.cpp:2037) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=3] server is initializing, ignore verify_ip_white_list(status=1, ret=0)
[2024-09-13 13:02:18.947872] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4E-0-0] [lt=13][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738947431)
[2024-09-13 13:02:18.947894] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:18.947914] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203738947887)
[2024-09-13 13:02:18.947901] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4E-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203738947431}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:18.947923] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203738747820, cluster_heartbeat_interval_=800000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:18.947944] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:18.947950] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:18.947955] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738947932)
[2024-09-13 13:02:18.947966] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:18.947970] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:18.947973] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203738947962)
[2024-09-13 13:02:18.975059] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.975378] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.975407] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.975422] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.975443] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.975464] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738975463, replica_locations:[]})
[2024-09-13 13:02:18.975492] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:18.975524] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:18.975537] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:18.975565] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:18.975621] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546852735, cache_obj->added_lc()=false, cache_obj->get_object_id()=36, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:18.976959] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:18.977169] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.977194] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:18.977208] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:18.977223] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:18.977238] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203738977237, replica_locations:[]})
[2024-09-13 13:02:18.977380] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1282350, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:19.013680] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.014018] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.014048] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.014059] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.014075] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.014095] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739014093, replica_locations:[]})
[2024-09-13 13:02:19.014117] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:19.014151] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:19.014163] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:19.014267] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:19.014328] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546891441, cache_obj->added_lc()=false, cache_obj->get_object_id()=37, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:19.015722] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=111][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.015963] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.015990] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.016000] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.016015] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.016032] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739016031, replica_locations:[]})
[2024-09-13 13:02:19.016100] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=37000, remain_us=1243630, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:19.025150] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=30] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14045415834, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:19.036588] INFO load_privilege_info (obmp_connect.cpp:573) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=31] no tenant name set, use default tenant name(tenant_name=sys)
[2024-09-13 13:02:19.037275] WDIAG [SHARE.SCHEMA] check_user_access (ob_schema_getter_guard.cpp:2915) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=25][errcode=-4043] password error(ret=-4043, ret="OB_PASSWORD_WRONG", login_info.passwd_.length()=20, user_info->get_passwd_str().length()=0)
[2024-09-13 13:02:19.037305] WDIAG [SERVER] load_privilege_info (obmp_connect.cpp:834) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=19][errcode=-4043] User access denied(login_info={tenant_name:"sys", user_name:"root", proxied_user_name:"", client_ip:"172.16.51.35", db:"", scramble_str:"}VwqT03Kb+N({V{+^zl0"}, ret=-4043)
[2024-09-13 13:02:19.037365] WDIAG [SERVER] verify_identify (obmp_connect.cpp:2135) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=51][errcode=-4043] load privilege info fail(pre_ret=-4043, ret=-4043, GCTX.status_=1)
[2024-09-13 13:02:19.037386] WDIAG [SERVER] process (obmp_connect.cpp:373) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=14][errcode=-4043] fail to verify_identify(ret=-4043)
[2024-09-13 13:02:19.037691] INFO [SERVER] send_error_packet (obmp_packet_sender.cpp:368) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=8] sending error packet(ob_error=-4043, client error=1045, extra_err_info=NULL, lbt()="0x24edc06b 0xb37d670 0xb32cd3e 0x254594c4 0x24d0f69a 0x24e0a83c 0x24e0a3a6 0x24e09e4c 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:19.037771] INFO [SERVER] free_session (obmp_base.cpp:324) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=16] free session successfully(ctx={has_inc_active_num:false, tenant_id:1, sessid:3221225480, proxy_sessid:0})
[2024-09-13 13:02:19.037806] WDIAG [SERVER] disconnect (obmp_packet_sender.cpp:832) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=18][errcode=0] server close connection(sessid=3221225480, proxy_sessid=0, stack="0x24edc06b 0xb380e00 0x254454a7 0x25458fc4 0x24d0f69a 0x24e0a83c 0x24e0a3a6 0x24e09e4c 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:19.037817] WDIAG [SERVER] get_session (obmp_packet_sender.cpp:594) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=10][errcode=-4018] get session fail(ret=-4018, sessid=3221225480, proxy_sessid=0)
[2024-09-13 13:02:19.037824] WDIAG [SERVER] disconnect (obmp_packet_sender.cpp:836) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=5][errcode=-4016] session is null
[2024-09-13 13:02:19.037831] INFO [SERVER] process (obmp_connect.cpp:514) [20237][T1_MysqlQueueTh][T1][Y0-000621F921660C7D-0-0] [lt=4] MySQL LOGIN(direct_client_ip="172.16.51.35", client_ip=172.16.51.35, tenant_name=sys, tenant_id=1, user_name=root, host_name=xxx.xxx.xxx.xxx, sessid=3221225480, proxy_sessid=0, sess_create_time=0, from_proxy=false, from_java_client=false, from_oci_client=false, from_jdbc_client=false, capability=3908101, proxy_capability=0, use_ssl=false, c/s protocol="OB_MYSQL_CS_TYPE", autocommit=false, proc_ret=-4043, ret=0, conn->client_type_=3, conn->client_version_=0)
[2024-09-13 13:02:19.037921] WDIAG [RPC.OBMYSQL] push_close_req (ob_sql_nio.cpp:879) [20053][sql_nio2][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4015] close sql sock by user req(*s={this:0x2b07baffef30, session_id:3221225480, trace_id:Y0-0000000000000000-0-0, sql_handling_stage:256, sql_initiative_shutdown:true, reader:{fd:128}, err:5, last_decode_time:1726203738945045, pending_write_task:{buf:null, sz:0}, need_epoll_trigger_write:false, consume_size:139, pending_flag:1, may_handling_flag:true, handler_close_flag:false})
[2024-09-13 13:02:19.037954] INFO [RPC.OBMYSQL] on_disconnect (obsm_conn_callback.cpp:268) [20053][sql_nio2][T0][Y0-0000000000000000-0-0] [lt=28] kill and revert session(conn.sessid_=3221225480, proxy_sessid=0, server_id=0, ret=0)
[2024-09-13 13:02:19.037968] INFO [RPC.OBMYSQL] handle_pending_destroy_list (ob_sql_nio.cpp:985) [20053][sql_nio2][T0][Y0-0000000000000000-0-0] [lt=10] can close safely, do destroy(*s={this:0x2b07baffef30, session_id:3221225480, trace_id:Y0-0000000000000000-0-0, sql_handling_stage:256, sql_initiative_shutdown:true, reader:{fd:128}, err:5, last_decode_time:1726203738945045, pending_write_task:{buf:null, sz:0}, need_epoll_trigger_write:false, consume_size:139, pending_flag:1, may_handling_flag:false, handler_close_flag:false})
[2024-09-13 13:02:19.037980] INFO [RPC.OBMYSQL] destroy (obsm_conn_callback.cpp:243) [20053][sql_nio2][T0][Y0-0000000000000000-0-0] [lt=11] connection close(sessid=3221225480, proxy_sessid=0, tenant_id=1, server_id=0, from_proxy=false, from_java_client=false, c/s protocol="OB_MYSQL_CS_TYPE", is_need_clear_sessid_=true, is_sess_alloc_=true, ret=0, trace_id=Y0-0000000000000000-0-0, conn.pkt_rec_wrapper_=[start_pkt_pos_:0, cur_pkt_pos_:1, pkt_rec[0]:{send:obp_mysql_header_{len_:82, seq_:3}, pkt_name:"PKT_ERR", obp_mysql_header_.is_send_:1}], disconnect_state=0)
[2024-09-13 13:02:19.046995] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20291][T1_L0_G0][T1][YB42AC103326-00062119D7A51A91-0-0] [lt=20][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:19.047018] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20291][T1_L0_G0][T1][YB42AC103326-00062119D7A51A91-0-0] [lt=21][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=738081)
[2024-09-13 13:02:19.047037] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20291][T1_L0_G0][T1][YB42AC103326-00062119D7A51A91-0-0] [lt=18][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1)
[2024-09-13 13:02:19.047055] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:923) [20291][T1_L0_G0][T1][YB42AC103326-00062119D7A51A91-0-0] [lt=16][errcode=-4012] exec base before process failed(ret=-4012)
[2024-09-13 13:02:19.047063] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20291][T1_L0_G0][T1][YB42AC103326-00062119D7A51A91-0-0] [lt=7][errcode=-4012] before process fail(ret=-4012)
[2024-09-13 13:02:19.047914] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4F-0-0] [lt=24][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739047500)
[2024-09-13 13:02:19.047945] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A4F-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203739047500}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:19.047962] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:19.047983] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739047955)
[2024-09-13 13:02:19.047995] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203738947930, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:19.048024] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.048040] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.048047] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739048009)
[2024-09-13 13:02:19.053371] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.053689] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.053712] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.053732] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739053731, replica_locations:[]})
[2024-09-13 13:02:19.053755] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:19.053784] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:19.053797] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:19.053824] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:19.053890] WDIAG [SQL.PC] common_free
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546930992, cache_obj->added_lc()=false, cache_obj->get_object_id()=38, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.055174] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.055396] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.055417] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.055430] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739055429, replica_locations:[]}) [2024-09-13 13:02:19.055512] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=38000, remain_us=1204218, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.061673] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6) [2024-09-13 13:02:19.092904] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:19.093781] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.093928] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:19.093943] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=8] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:19.094031] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.094054] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.094074] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739094073, replica_locations:[]}) [2024-09-13 13:02:19.094096] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.094125] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.094138] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:19.094173] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.094229] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6546971342, cache_obj->added_lc()=false, cache_obj->get_object_id()=39, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.094368] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC 
EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:19.094378] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=6] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:19.094392] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=7] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:19.094406] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=12] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:19.094722] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=42] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:19.095562] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.095779] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:19.095818] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.095840] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.095856] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739095856, replica_locations:[]}) [2024-09-13 13:02:19.095933] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1163798, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.109061] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO [ratelimit] time: 1726203739109059, bytes: 2250110, bw: 0.252029 MB/s, add_ts: 1001231, add_bytes: 264597 [2024-09-13 13:02:19.118106] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=16] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:19.125777] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2155-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.126470] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2159-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.126947] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB215A-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.127213] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20320][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1825361100800) [2024-09-13 13:02:19.127294] INFO register_pm 
(ob_page_manager.cpp:40) [20320][][T0][Y0-0000000000000000-0-0] [lt=20] register pm finish(ret=0, &pm=0x2b07d7b52340, pm.get_tid()=20320, tenant_id=500) [2024-09-13 13:02:19.127328] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20320][][T1][Y0-0000000000000000-0-0] [lt=18] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=190) [2024-09-13 13:02:19.127339] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20320][][T1][Y0-0000000000000000-0-0] [lt=9] Init thread local success [2024-09-13 13:02:19.127356] INFO unregister_pm (ob_page_manager.cpp:50) [20320][][T1][Y0-0000000000000000-0-0] [lt=14] unregister pm finish(&pm=0x2b07d7b52340, pm.get_tid()=20320) [2024-09-13 13:02:19.127373] INFO register_pm (ob_page_manager.cpp:40) [20320][][T1][Y0-0000000000000000-0-0] [lt=14] register pm finish(ret=0, &pm=0x2b07d7b52340, pm.get_tid()=20320, tenant_id=1) [2024-09-13 13:02:19.127501] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20321][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1829656068096) [2024-09-13 13:02:19.127557] INFO register_pm (ob_page_manager.cpp:40) [20321][][T0][Y0-0000000000000000-0-0] [lt=18] register pm finish(ret=0, &pm=0x2b07d7bd0340, pm.get_tid()=20321, tenant_id=500) [2024-09-13 13:02:19.127580] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20321][][T1][Y0-0000000000000000-0-0] [lt=15] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=191) [2024-09-13 13:02:19.127581] INFO [SERVER.OMT] check_worker_count (ob_tenant.cpp:507) [19932][pnio1][T0][YB42AC103326-00062119ED62FC74-0-0] [lt=10] worker thread created(tenant_->id()=1, group_id_=10, token=2) [2024-09-13 13:02:19.127587] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20321][][T1][Y0-0000000000000000-0-0] [lt=6] Init thread local success [2024-09-13 13:02:19.127593] INFO [SERVER.OMT] recv_group_request (ob_tenant.cpp:1382) [19932][pnio1][T0][YB42AC103326-00062119ED62FC74-0-0] [lt=12] create group successfully(id=1, 
group_id=10, group=0x2b07d6eb6030) [2024-09-13 13:02:19.127593] INFO unregister_pm (ob_page_manager.cpp:50) [20321][][T1][Y0-0000000000000000-0-0] [lt=5] unregister pm finish(&pm=0x2b07d7bd0340, pm.get_tid()=20321) [2024-09-13 13:02:19.127632] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC74-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.127604] INFO register_pm (ob_page_manager.cpp:40) [20321][][T1][Y0-0000000000000000-0-0] [lt=9] register pm finish(ret=0, &pm=0x2b07d7bd0340, pm.get_tid()=20321, tenant_id=1) [2024-09-13 13:02:19.127663] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB215E-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.127926] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB215F-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.128387] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2163-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.128602] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2164-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.129019] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2168-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.129250] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2169-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.129613] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB216D-0-0] [lt=12][errcode=-8004] checking cluster ID 
failed(ret=-8004) [2024-09-13 13:02:19.135203] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.135404] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.135429] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.135457] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739135456, replica_locations:[]}) [2024-09-13 13:02:19.135479] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.135508] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.135521] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) 
[2024-09-13 13:02:19.135549] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.135603] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547012717, cache_obj->added_lc()=false, cache_obj->get_object_id()=40, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.136358] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=35] PNIO [ratelimit] time: 1726203739136356, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007614, add_bytes: 0 [2024-09-13 13:02:19.136965] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.137136] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.137160] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.137177] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has 
changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739137176, replica_locations:[]}) [2024-09-13 13:02:19.137241] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=40000, remain_us=1122490, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.147987] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A50-0-0] [lt=31][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739147568) [2024-09-13 13:02:19.148013] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A50-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203739147568}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 
13:02:19.148027] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:19.148045] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739148020) [2024-09-13 13:02:19.148057] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203739048006, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:19.148082] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.148091] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.148097] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, 
ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739148069) [2024-09-13 13:02:19.177488] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.177750] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.177776] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.177792] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739177791, replica_locations:[]}) [2024-09-13 13:02:19.177814] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.177843] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.177856] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:19.177899] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.177973] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547055086, cache_obj->added_lc()=false, cache_obj->get_object_id()=41, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.179226] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.179251] INFO pktc_sk_new (pktc_sk_factory.h:78) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=10] PNIO sk_new: s=0x2b07b0be7468 [2024-09-13 13:02:19.179265] INFO pktc_sk_new (pktc_sk_factory.h:78) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO sk_new: s=0x2b07b0be8048 [2024-09-13 13:02:19.179300] INFO pktc_do_connect (pktc_post.h:19) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=11] PNIO sk_new: sk=0x2b07b0be7468, fd=129 [2024-09-13 13:02:19.179316] INFO ussl_loop_add_clientfd (ussl-loop.c:262) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=10] [ussl] write client fd succ, fd:129, gid:0x100000001, need_send_negotiation:1 [2024-09-13 13:02:19.179321] INFO pktc_do_connect (pktc_post.h:19) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=21] PNIO sk_new: 
sk=0x2b07b0be8048, fd=130
[2024-09-13 13:02:19.179324] INFO eloop_regist (eloop.c:47) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO sock regist: 0x2b07b0be7468 fd=129
[2024-09-13 13:02:19.179330] INFO pktc_sk_check_connect (pktc_sk_factory.h:17) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO sock not ready: 0x2b07b0be7468, fd=129
[2024-09-13 13:02:19.179332] INFO ussl_loop_add_clientfd (ussl-loop.c:262) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] [ussl] write client fd succ, fd:130, gid:0x100000002, need_send_negotiation:1
[2024-09-13 13:02:19.179339] INFO eloop_regist (eloop.c:47) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO sock regist: 0x2b07b0be8048 fd=130
[2024-09-13 13:02:19.179348] INFO pktc_sk_check_connect (pktc_sk_factory.h:17) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO sock not ready: 0x2b07b0be8048, fd=130
[2024-09-13 13:02:19.179355] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] [ussl] sock regist: 0x2b07b3e216d0 fd=129
[2024-09-13 13:02:19.179362] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] sock regist: 0x2b07b3e217b0 fd=130
[2024-09-13 13:02:19.179644] INFO handle_client_writable_event (handle-event.c:125) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] client send negotiation message succ, fd:129, addr:"172.16.51.35:38138", auth_method:NONE, gid:0x100000001
[2024-09-13 13:02:19.179659] INFO epoll_unregist_and_give_back (handle-event.c:63) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=10] [ussl] give back fd to origin epoll succ, client_fd:129, client_epfd:72, event:0x8000000d, client_addr:"172.16.51.35:38138", need_close:0
[2024-09-13 13:02:19.179671] INFO pktc_sk_check_connect (pktc_sk_factory.h:25) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO sock connect OK: 0x2b07b0be7468 fd:129:local:"172.16.51.36:2882":remote:"172.16.51.36:2882"
[2024-09-13 13:02:19.179900] INFO handle_client_writable_event (handle-event.c:125) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] client send negotiation message succ, fd:130, addr:"172.16.51.35:55174", auth_method:NONE, gid:0x100000002
[2024-09-13 13:02:19.179911] INFO epoll_unregist_and_give_back (handle-event.c:63) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] give back fd to origin epoll succ, client_fd:130, client_epfd:79, event:0x8000000d, client_addr:"172.16.51.35:55174", need_close:0
[2024-09-13 13:02:19.179922] INFO pktc_sk_check_connect (pktc_sk_factory.h:25) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO sock connect OK: 0x2b07b0be8048 fd:130:local:"172.16.51.37:2882":remote:"172.16.51.37:2882"
[2024-09-13 13:02:19.180376] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.180399] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.180413] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739180412, replica_locations:[]})
[2024-09-13 13:02:19.180495] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=41000, remain_us=1079236, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:19.182387] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782D9-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.204890] INFO [OCCAM] get_idx (ob_occam_time_guard.h:224) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6] init point thread id with(&point=0x55a3873cb840, idx_=3727, point=[thread id=20111, timeout ts=08:00:00.0, last click point="(null):(null):0", last click ts=08:00:00.0], thread_id=20111)
[2024-09-13 13:02:19.204931] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=37] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}])
[2024-09-13 13:02:19.221478] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.221678] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.221830] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=10][errcode=-4018] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:19.221862] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.221872] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.221996] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=20][errcode=0] server is initiating(server_id=0, local_seq=9, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:19.223250] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=16] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019)
[2024-09-13 13:02:19.223275] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=22][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-09-13 13:02:19.223283] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase)
[2024-09-13 13:02:19.223289] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=7][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-09-13 13:02:19.223296] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:19.223304] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:19.223310] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-09-13 13:02:19.223318] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=7][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:19.223322] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=3][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:19.223334] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=12][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:19.223339] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:19.223344] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:19.223349] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:19.223354] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:19.223367] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=7][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:19.223374] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=7][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:19.223380] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=5][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:19.223386] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=5][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:19.223393] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=6][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-09-13 13:02:19.223402] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=8][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:19.223407] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:19.223426] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=15][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:19.223448] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=19][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:19.223455] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=6][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:19.223458] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:19.223482] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=7][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:19.223491] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:19.223496] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=4][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:19.223504] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:19.223509] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=5][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:19.223514] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=5][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:19.223519] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203739222959, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:19.223533] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=14][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:19.223543] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=6][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:19.223608] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=11][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-09-13 13:02:19.223619] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=9][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true)
[2024-09-13 13:02:19.223626] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=6][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:19.223633] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=5][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:19.223643] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=7][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-09-13 13:02:19.223654] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=9][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-09-13 13:02:19.223754] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7D-0-0] [lt=96][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:19.223773] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.223792] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.223806] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739223805, replica_locations:[]})
[2024-09-13 13:02:19.223822] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:19.223844] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:19.223853] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:19.223871] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:19.223921] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547101038, cache_obj->added_lc()=false, cache_obj->get_object_id()=42, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:19.224980] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.225263] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=9] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:19.225326] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=14] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=8318976)
[2024-09-13 13:02:19.225571] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14047512986, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:19.225654] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.225671] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.225684] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739225683, replica_locations:[]})
[2024-09-13 13:02:19.225734] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=42000, remain_us=1033996, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:19.228495] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=12] gc stale ls task succ
[2024-09-13 13:02:19.232146] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=10] start do ls ha handler(ls_id_array_=[])
[2024-09-13 13:02:19.236212] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] read /sys/class/net/eth0/speed failed, errno 22
[2024-09-13 13:02:19.236231] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000
[2024-09-13 13:02:19.236238] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0")
[2024-09-13 13:02:19.236245] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR")
[2024-09-13 13:02:19.248090] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1)
[2024-09-13 13:02:19.248108] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:19.248136] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739248084)
[2024-09-13 13:02:19.248148] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203739148067, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:19.248166] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.248174] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.248180] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739248156)
[2024-09-13 13:02:19.248161] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60C82-0-0] [lt=20][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203739248120})
[2024-09-13 13:02:19.248400] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A51-0-0] [lt=24][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739247917)
[2024-09-13 13:02:19.248432] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1})
[2024-09-13 13:02:19.248421] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A51-0-0] [lt=14][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203739247917}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:19.248459] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-09-13 13:02:19.248478] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.248488] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.248495] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739248472)
[2024-09-13 13:02:19.261776] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5)
[2024-09-13 13:02:19.267993] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.268373] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.268396] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.268414] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739268413, replica_locations:[]})
[2024-09-13 13:02:19.268446] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=30] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:19.268475] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:19.268482] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:19.268521] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:19.268580] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547145694, cache_obj->added_lc()=false, cache_obj->get_object_id()=43, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:19.269645] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.269959] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.269978] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.269991] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739269990, replica_locations:[]})
[2024-09-13 13:02:19.270060] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=989671, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:19.313321] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.313705] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:19.313741] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.313761] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.313779] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739313778, replica_locations:[]})
[2024-09-13 13:02:19.313800] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:19.313831] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:19.313900] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=67][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:19.313934] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:19.314012] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547191125, cache_obj->added_lc()=false, cache_obj->get_object_id()=44, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:19.315846] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.316165] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.316184] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.316191] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.316200] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.316210] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739316209, replica_locations:[]})
[2024-09-13 13:02:19.316225] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:19.316222] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=29][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4018, dropped:28, tid:19877}, {errcode:-4721, dropped:1219, tid:19944}])
[2024-09-13 13:02:19.316239] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:19.316249] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:19.316267] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:19.316279] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:19.316291] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:19.316305] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:19.316315] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:19.316320] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:19.316329] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:19.316333] WDIAG [SQL.JO] generate_base_table_paths
(ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=3][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:19.316337] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:19.316343] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:19.316352] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:19.316356] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:19.316363] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:19.316369] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:19.316379] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:19.316386] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:19.316400] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:19.316413] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:19.316424] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:19.316432] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:19.316450] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:19.316461] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=44, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:19.316485] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16] will sleep(sleep_us=44000, remain_us=943245, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.325641] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.325663] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=21][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:19.325690] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:19.325699] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:19.325722] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=4] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:19.325733] WDIAG [STORAGE.TRANS] operator() (ob_ts_mgr.h:175) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4721] refresh gts failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:19.325741] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:19.332516] INFO pn_ratelimit (group.c:643) [20054][IngressService][T0][Y0-0000000000000000-0-0] [lt=13] 
PNIO set ratelimit as 9223372036854775807 bytes/s, grp_id=2 [2024-09-13 13:02:19.347974] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=16] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:19.348413] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A52-0-0] [lt=54][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739347996) [2024-09-13 13:02:19.348460] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A52-0-0] [lt=38][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203739347996}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:19.348488] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, 
tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:19.348511] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739348480) [2024-09-13 13:02:19.348523] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203739248156, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:19.348548] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.348557] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.348562] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739348533) [2024-09-13 13:02:19.352104] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B35-0-0] [lt=11] ObTimestampAccess service type is FOLLOWER(ret=-4038, 
service_type=0) [2024-09-13 13:02:19.352122] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B35-0-0] [lt=17][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203739351686], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:19.352864] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC4-0-0] [lt=20][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203739352218, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035035, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203739351804}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:19.352912] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC4-0-0] [lt=47][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.353530] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC4-0-0] [lt=6][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.360718] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.361070] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.361099] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.361109] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.361120] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.361136] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739361135, replica_locations:[]}) [2024-09-13 13:02:19.361155] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.361180] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:44, 
local_retry_times:44, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:19.361200] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.361212] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:19.361226] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.361233] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.361237] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:19.361250] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:19.361261] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.361306] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6547238422, cache_obj->added_lc()=false, cache_obj->get_object_id()=45, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.362365] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.362392] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=26][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.362516] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.363116] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.363143] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.363152] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.363170] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.363186] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739363185, replica_locations:[]}) [2024-09-13 13:02:19.363206] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.363219] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.363232] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, 
renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.363248] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:19.363256] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:19.363264] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:19.363282] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:19.363297] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:19.363309] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:19.363319] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:19.363326] 
WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:19.363331] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:19.363340] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:19.363350] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:19.363354] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:19.363359] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:19.363363] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:19.363369] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:19.363374] WDIAG [SQL] generate_plan 
(ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:19.363386] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:19.363395] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:19.363406] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:19.363414] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:19.363426] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:19.363462] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=34][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=45, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:19.363488] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15] will sleep(sleep_us=45000, remain_us=896243, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.408732] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.409123] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.409148] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.409155] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.409165] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.409178] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739409177, replica_locations:[]}) [2024-09-13 13:02:19.409195] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.409214] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:45, local_retry_times:45, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:19.409231] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.409240] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:19.409250] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.409255] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.409259] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:19.409282] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = 
'__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:19.409293] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.409348] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547286465, cache_obj->added_lc()=false, cache_obj->get_object_id()=46, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.410315] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=31][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.410345] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=29][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.410455] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.410734] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.410751] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.410756] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.410764] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.410776] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739410775, replica_locations:[]}) [2024-09-13 13:02:19.410789] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.410797] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", 
cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.410804] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.410813] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:19.410820] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:19.410825] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:19.410839] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:19.410847] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) 
[2024-09-13 13:02:19.410858] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:19.410863] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:19.410868] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:19.410872] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:19.410890] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:19.410898] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:19.410906] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:19.410910] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] fail to generate raw 
plan(ret=-4721) [2024-09-13 13:02:19.410916] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:19.410921] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:19.410928] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:19.410939] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:19.410947] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:19.410955] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:19.410962] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:19.410969] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:19.410976] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, 
tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=46, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:19.410993] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] will sleep(sleep_us=46000, remain_us=848737, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.414751] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690056-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.425949] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=29] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14047512986, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:19.448480] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A53-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739448070) [2024-09-13 13:02:19.448525] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A53-0-0] [lt=35][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, 
generate_timestamp:1726203739448070}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:19.448556] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:19.448580] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739448547) [2024-09-13 13:02:19.448598] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203739348531, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:19.448631] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] get gts cache error(ret=-4023, 
tenant_id=1) [2024-09-13 13:02:19.448642] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.448650] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739448615) [2024-09-13 13:02:19.452018] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E5-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.452514] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E5-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.452782] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E5-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.453207] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E5-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.453469] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E5-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.453891] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E5-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:19.457025] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=5][errcode=0] server is 
initiating(server_id=0, local_seq=10, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:19.457190] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.458143] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=22][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:19.458226] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.458248] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.458258] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.458275] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.458291] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, 
renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739458290, replica_locations:[]}) [2024-09-13 13:02:19.458306] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.458325] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:46, local_retry_times:46, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:19.458343] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.458351] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:19.458361] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.458367] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.458371] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:19.458383] WDIAG [SERVER] query 
(ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:19.458393] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.458447] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547335554, cache_obj->added_lc()=false, cache_obj->get_object_id()=47, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.459386] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.459423] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=36][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.459558] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4719] get 
ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.459778] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.459798] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.459805] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.459812] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.459823] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739459822, replica_locations:[]}) [2024-09-13 13:02:19.459837] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) 
[2024-09-13 13:02:19.459844] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.459850] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.459860] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:19.459866] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:19.459870] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:19.459893] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 
13:02:19.459908] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:19.459914] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:19.459922] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:19.459926] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:19.459933] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:19.459940] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:19.459950] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:19.459954] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=3][errcode=-4721] fail to 
generate normal raw plan(ret=-4721) [2024-09-13 13:02:19.459959] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:19.459964] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:19.459970] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:19.459975] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:19.459988] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:19.460000] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:19.460011] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:19.460019] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:19.460028] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] executor 
execute failed(ret=-4721) [2024-09-13 13:02:19.460033] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=47, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:19.460051] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] will sleep(sleep_us=47000, remain_us=799679, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.461865] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4) [2024-09-13 13:02:19.507329] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.507577] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.507610] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.507621] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] 
leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.507638] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.507659] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739507657, replica_locations:[]}) [2024-09-13 13:02:19.507682] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.507708] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:47, local_retry_times:47, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:19.507731] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.507743] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:19.507759] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.507770] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.507777] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:19.507811] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:19.507826] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.507895] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547384997, cache_obj->added_lc()=false, cache_obj->get_object_id()=48, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.509113] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.509142] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.509402] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.509944] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.509968] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.509977] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.509986] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.510003] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739510002, replica_locations:[]}) [2024-09-13 13:02:19.510022] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.510036] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.510049] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.510067] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:19.510078] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", 
tablet_id={id:1}) [2024-09-13 13:02:19.510090] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:19.510109] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:19.510123] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:19.510133] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:19.510140] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:19.510149] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:19.510157] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:19.510165] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:19.510178] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:19.510188] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:19.510198] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:19.510209] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:19.510219] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:19.510229] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:19.510243] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:19.510255] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Failed to generate 
plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:19.510266] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:19.510277] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:19.510288] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:19.510299] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=48, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:19.510322] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] will sleep(sleep_us=48000, remain_us=749409, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.548627] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) 
[2024-09-13 13:02:19.548616] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A54-0-0] [lt=8][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739548146) [2024-09-13 13:02:19.548662] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=33][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739548619) [2024-09-13 13:02:19.548676] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203739448612, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:19.548701] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.548719] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.548726] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server 
version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739548687) [2024-09-13 13:02:19.548746] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.548731] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A54-0-0] [lt=39][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203739548146}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:19.548755] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.548761] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739548742) [2024-09-13 13:02:19.558625] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.558968] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.559009] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.559027] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.559043] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.559069] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739559067, replica_locations:[]}) [2024-09-13 13:02:19.559096] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.559137] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=33][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:48, local_retry_times:48, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:19.559160] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.559174] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:19.559191] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.559200] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.559229] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:19.559253] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:19.559283] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.559347] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547436460, cache_obj->added_lc()=false, cache_obj->get_object_id()=49, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.560967] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.561009] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=40][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.561183] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.561374] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.561395] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.561405] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.561417] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.561432] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739561431, replica_locations:[]}) [2024-09-13 13:02:19.561466] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=31][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.561490] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.561501] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.561515] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:19.561524] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:19.561534] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:19.561549] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:19.561561] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:19.561570] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:19.561580] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:19.561587] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:19.561595] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:19.561606] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:19.561616] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:19.561625] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:19.561640] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:19.561648] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:19.561657] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:19.561665] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:19.561681] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:19.561692] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:19.561701] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:19.561710] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:19.561719] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:19.561728] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM 
__all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=49, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:19.561749] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] will sleep(sleep_us=49000, remain_us=697981, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.611122] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.611419] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.611586] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=165][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.611604] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.611624] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.611656] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22] 
[LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739611655, replica_locations:[]}) [2024-09-13 13:02:19.611682] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.611712] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=22][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:49, local_retry_times:49, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:19.611739] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.611753] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:19.611772] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.611785] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:19.611798] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] 
[lt=12][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:19.611831] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:19.611849] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.611911] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547489027, cache_obj->added_lc()=false, cache_obj->get_object_id()=50, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.613302] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.613503] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=131][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.613691] WDIAG 
[SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.613926] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.613965] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=38][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.613982] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.614001] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.614022] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739614020, replica_locations:[]}) [2024-09-13 13:02:19.614047] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=23][errcode=-4721] get empty location from meta 
table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.614065] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:19.614081] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:19.614152] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=50000, remain_us=645579, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.614921] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=41] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 
9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 
9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:19.626296] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=40] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14043318682, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:19.648720] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A55-0-0] [lt=29][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739648213) [2024-09-13 13:02:19.648798] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A55-0-0] [lt=41][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203739648213}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, 
epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:19.648810] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:19.648834] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739648802) [2024-09-13 13:02:19.648850] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203739548687, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:19.648881] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.648891] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.648896] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739648859) [2024-09-13 13:02:19.661973] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=28] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3) [2024-09-13 13:02:19.663311] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEF-0-0] [lt=18][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:19.663345] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEF-0-0] [lt=24][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=994210) [2024-09-13 13:02:19.663355] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEF-0-0] [lt=7][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1) [2024-09-13 13:02:19.663363] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:1126) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEF-0-0] [lt=6][errcode=-4012] base before process failed(ret=-4012) [2024-09-13 13:02:19.663373] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED978DEF-0-0] [lt=8][errcode=-4012] before process fail(ret=-4012) [2024-09-13 13:02:19.664408] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:19.664729] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.664749] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.664756] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.664765] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.664780] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739664779, replica_locations:[]}) [2024-09-13 13:02:19.664796] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.664822] WDIAG [SQL] 
do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.664831] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:19.664851] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.664905] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547542022, cache_obj->added_lc()=false, cache_obj->get_object_id()=51, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.666178] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.666335] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.666356] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:19.666367] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.666377] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.666386] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739666385, replica_locations:[]}) [2024-09-13 13:02:19.666460] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=593270, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:19.669040] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF0-0-0] [lt=37][errcode=0] server is initiating(server_id=0, local_seq=11, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:19.669900] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF0-0-0] [lt=20][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:19.691727] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) 
[20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=8][errcode=0] server is initiating(server_id=0, local_seq=12, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:19.692633] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=30][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:19.717705] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.718014] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.718039] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.718046] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.718055] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.718068] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739718067, replica_locations:[]}) [2024-09-13 13:02:19.718084] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:19.718109] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:19.718122] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:19.718161] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:19.718209] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547595325, cache_obj->added_lc()=false, cache_obj->get_object_id()=52, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:19.719320] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] 
[lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.719495] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.719514] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.719523] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.719534] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:19.719546] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739719546, replica_locations:[]}) [2024-09-13 13:02:19.719602] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=52000, remain_us=540128, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 
13:02:19.725341] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=14] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:19.725376] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=17] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=8318976) [2024-09-13 13:02:19.748743] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A56-0-0] [lt=5][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739748296) [2024-09-13 13:02:19.748788] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A56-0-0] [lt=41][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203739748296}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:19.748858] WDIAG [STORAGE.TRANS] 
generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.748870] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:19.748887] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739748843) [2024-09-13 13:02:19.771869] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:19.772187] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.772214] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:19.772225] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:19.772238] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.772275] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739772273, replica_locations:[]})
[2024-09-13 13:02:19.772296] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:19.772328] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:19.772340] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:19.772369] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:19.772429] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547649541, cache_obj->added_lc()=false, cache_obj->get_object_id()=53, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:19.773583] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.773765] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.773782] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.773789] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.773799] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.773808] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739773807, replica_locations:[]})
[2024-09-13 13:02:19.773886] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=53000, remain_us=485857, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:19.826196] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:19.826253] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]})
[2024-09-13 13:02:19.826669] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=38] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14045415834, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:19.827187] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.827471] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.827492] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.827502] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.827514] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.827528] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739827527, replica_locations:[]})
[2024-09-13 13:02:19.827543] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:19.827587] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:19.827597] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:19.827634] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:19.827683] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547704799, cache_obj->added_lc()=false, cache_obj->get_object_id()=54, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:19.829018] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.829238] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.829257] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.829263] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.829274] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.829287] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739829286, replica_locations:[]})
[2024-09-13 13:02:19.829341] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=54000, remain_us=430389, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:19.832052] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=18] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=5364, clean_start_pos=125829, clean_num=125829)
[2024-09-13 13:02:19.848923] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:19.848979] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=40][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739848914)
[2024-09-13 13:02:19.848990] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203739648856, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:19.848980] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A57-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739848372)
[2024-09-13 13:02:19.849011] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.849017] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.849004] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A57-0-0] [lt=22][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203739848372}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:19.849027] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739848997)
[2024-09-13 13:02:19.849038] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.849041] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.849045] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739849035)
[2024-09-13 13:02:19.852638] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B36-0-0] [lt=8] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:19.852660] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B36-0-0] [lt=21][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203739852155], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:19.853216] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC5-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:19.853835] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC5-0-0] [lt=20][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203739853508, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035070, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203739852753}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:19.853889] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC5-0-0] [lt=53][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:19.862074] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=26] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2)
[2024-09-13 13:02:19.872896] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=20] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:19.873214] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:19.873251] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=9] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:19.883634] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.884645] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.884675] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.884685] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.884697] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.884715] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739884713, replica_locations:[]})
[2024-09-13 13:02:19.884737] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:19.884772] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:19.884785] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:19.884812] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:19.884873] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547761986, cache_obj->added_lc()=false, cache_obj->get_object_id()=55, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:19.886007] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.886239] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.886262] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.886271] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.886284] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.886302] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739886301, replica_locations:[]})
[2024-09-13 13:02:19.886387] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=55000, remain_us=373344, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:19.941663] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.941986] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.942012] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.942019] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.942029] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.942044] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739942042, replica_locations:[]})
[2024-09-13 13:02:19.942060] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:19.942083] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:19.942092] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:19.942133] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:19.942182] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547819297, cache_obj->added_lc()=false, cache_obj->get_object_id()=56, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:19.943286] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=32][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:19.943529] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.943557] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:19.943564] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:19.943575] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:19.943586] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203739943585, replica_locations:[]})
[2024-09-13 13:02:19.943650] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=56000, remain_us=316081, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:19.948923] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A58-0-0] [lt=27][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203739948459)
[2024-09-13 13:02:19.948958] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A58-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203739948459}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:19.948993] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.949009] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:19.949018] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203739948977)
[2024-09-13 13:02:19.999968] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.000335] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.000370] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.000380] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.000394] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.000413] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740000412, replica_locations:[]})
[2024-09-13 13:02:20.000447] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.000481] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.000493] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.000522] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.000581] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547877694, cache_obj->added_lc()=false, cache_obj->get_object_id()=57, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.002041] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.002469] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.002500] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.002510] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.002523] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.002542] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740002540, replica_locations:[]})
[2024-09-13 13:02:20.002680] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=257051, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:20.032527] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14045415834, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:20.049010] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A59-0-0] [lt=56][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740048545)
[2024-09-13 13:02:20.049046] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A59-0-0] [lt=34][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203740048545}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:20.049053] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:20.049079] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740049044)
[2024-09-13 13:02:20.049094] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203739848997, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:20.049125] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.049134] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.049148] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740049107)
[2024-09-13 13:02:20.049163] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.049169] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.049172] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740049159)
[2024-09-13 13:02:20.056892] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=19] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60,
log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1}) [2024-09-13 13:02:20.060007] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.060343] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.060369] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.060376] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.060389] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.060409] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, 
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740060408, replica_locations:[]}) [2024-09-13 13:02:20.060430] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.060471] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.060480] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.060527] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.060584] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547937700, cache_obj->added_lc()=false, cache_obj->get_object_id()=58, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.061812] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.062043] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.062067] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.062077] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.062092] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.062109] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740062108, replica_locations:[]}) [2024-09-13 13:02:20.062160] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1) [2024-09-13 13:02:20.062174] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=197556, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:20.093957] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=18] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:20.093993] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=9] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:20.094004] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=5] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:20.094014] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=5] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:20.094835] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=17] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:20.094945] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=9] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:20.094966] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=5] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:20.095366] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, 
request doing=0/0) [2024-09-13 13:02:20.095647] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=14] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:20.116682] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=24] PNIO [ratelimit] time: 1726203740116678, bytes: 2312934, bw: 0.059461 MB/s, add_ts: 1007619, add_bytes: 62824 [2024-09-13 13:02:20.118204] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=15] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:20.120460] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.120967] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.121008] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.121021] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.121035] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.121053] INFO 
[SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740121052, replica_locations:[]}) [2024-09-13 13:02:20.121071] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.121097] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.121108] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.121133] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.121182] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6547998299, cache_obj->added_lc()=false, cache_obj->get_object_id()=59, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 
0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.122222] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.122407] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.122429] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.122461] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=31] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.122472] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.122487] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740122485, replica_locations:[]}) [2024-09-13 13:02:20.122541] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=137189, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:20.128735] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC75-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:20.143982] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=25] PNIO [ratelimit] time: 1726203740143978, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007622, add_bytes: 0 [2024-09-13 13:02:20.144067] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:565) [20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=11] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0) [2024-09-13 13:02:20.149092] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5A-0-0] [lt=30][errcode=-4341] process cluster heartbeat rpc: self 
is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740148621) [2024-09-13 13:02:20.149157] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5A-0-0] [lt=56][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203740148621}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:20.149183] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:20.149203] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740149176) [2024-09-13 13:02:20.149212] 
WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203740049104, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:20.149236] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:20.149248] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:20.149254] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740149222) [2024-09-13 13:02:20.181798] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.182133] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.182163] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.182174] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.182188] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.182224] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=29] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740182223, replica_locations:[]}) [2024-09-13 13:02:20.182242] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.182270] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.182280] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.182305] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] 
[lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.182361] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548059477, cache_obj->added_lc()=false, cache_obj->get_object_id()=60, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.183536] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.183748] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.183768] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.183779] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.183805] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", 
server_list=[]) [2024-09-13 13:02:20.183818] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740183817, replica_locations:[]}) [2024-09-13 13:02:20.183872] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=60000, remain_us=75859, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203740259730) [2024-09-13 13:02:20.184840] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782DA-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.205477] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=13] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:20.217032] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.217443] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.218401] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] [ussl] sock regist: 0x2b07b3e20740 fd=128 [2024-09-13 13:02:20.218419] INFO ussl_on_accept (ussl_listenfd.c:39) 
[19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=14] [ussl] accept new connection, fd:128, src_addr:172.16.51.36:53884
[2024-09-13 13:02:20.223996] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.225420] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=12] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:20.225459] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=12] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=8318976)
[2024-09-13 13:02:20.227034] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:346) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=12] ====== check clog disk timer task ======
[2024-09-13 13:02:20.227059] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=14] get_disk_usage(ret=0, capacity(MB):=0, used(MB):=0)
[2024-09-13 13:02:20.227073] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=9] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false)
[2024-09-13 13:02:20.228563] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=16] gc stale ls task succ
[2024-09-13 13:02:20.229981] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] auth mothod is NONE, the fd will be dispatched, fd:128, src_addr:172.16.51.36:53884
[2024-09-13 13:02:20.229997] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=14] PNIO dispatch fd to certain group, fd:128, gid:0x100000002
[2024-09-13 13:02:20.230058] INFO pkts_sk_init (pkts_sk_factory.h:23) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=11] PNIO set pkts_sk_t sock_id s=0x2b07b0be8a98, s->id=65534
[2024-09-13 13:02:20.230072] INFO pkts_sk_new (pkts_sk_factory.h:51) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=14] PNIO sk_new: s=0x2b07b0be8a98
[2024-09-13 13:02:20.230088] INFO eloop_regist (eloop.c:47) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO sock regist: 0x2b07b0be8a98 fd=128
[2024-09-13 13:02:20.230102] INFO on_accept (listenfd.c:39) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO accept new connection, ns=0x2b07b0be8a98, fd=fd:128:local:"172.16.51.36:53884":remote:"172.16.51.36:53884"
[2024-09-13 13:02:20.230104] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.230117] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.230122] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.230152] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=28] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.230170] WDIAG listenfd_handle_event (listenfd.c:71) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=5][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1
[2024-09-13 13:02:20.230183] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=5][errcode=0] server is initiating(server_id=0, local_seq=13, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:20.230198] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.230553] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.230857] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.231164] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019)
[2024-09-13 13:02:20.231185] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=19][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-09-13 13:02:20.231193] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase)
[2024-09-13 13:02:20.231203] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-09-13 13:02:20.231210] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:20.231217] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:20.231238] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=19][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-09-13 13:02:20.231243] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:20.231247] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:20.231251] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=3][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:20.231256] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:20.231261] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:20.231266] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=5][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:20.231270] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:20.231284] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=7][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:20.231291] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:20.231297] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:20.231304] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=7][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:20.231309] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-09-13 13:02:20.231322] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=12][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:20.231327] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:20.231341] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=10][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:20.231357] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:20.231362] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:20.231366] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:20.231376] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=3][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:20.231385] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.231391] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:20.231398] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:20.231403] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:20.231420] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=15][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:20.231425] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203740231017, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:20.231444] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=18][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:20.231449] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=3][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:20.231504] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=8][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-09-13 13:02:20.231513] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=8][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true)
[2024-09-13 13:02:20.231518] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=5][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:20.231524] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=5][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:20.231530] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-09-13 13:02:20.231539] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=7][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-09-13 13:02:20.231543] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7E-0-0] [lt=4][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:20.232235] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=19] start do ls ha handler(ls_id_array_=[])
[2024-09-13 13:02:20.232920] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=46] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14045415834, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:20.236383] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] read /sys/class/net/eth0/speed failed, errno 22
[2024-09-13 13:02:20.236401] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000
[2024-09-13 13:02:20.236408] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0")
[2024-09-13 13:02:20.236415] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR")
[2024-09-13 13:02:20.244163] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.244469] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.244493] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.244499] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.244510] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.244526] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740244525, replica_locations:[]})
[2024-09-13 13:02:20.244542] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.244566] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.244575] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.244605] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.244653] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548121769, cache_obj->added_lc()=false, cache_obj->get_object_id()=61, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.245695] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7D-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.245951] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.245977] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.245988] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.246000] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.246014] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740246013, replica_locations:[]})
[2024-09-13 13:02:20.246083] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1] will sleep(sleep_us=13648, remain_us=13648, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203740259730)
[2024-09-13 13:02:20.249140] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5B-0-0] [lt=42][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740248684)
[2024-09-13 13:02:20.249178] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5B-0-0] [lt=31][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203740248684}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:20.249198] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1})
[2024-09-13 13:02:20.249220] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-09-13 13:02:20.249251] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.249269] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.249281] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740249239)
[2024-09-13 13:02:20.259845] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203740259732, ctx_timeout_ts=1726203740259732, worker_timeout_ts=1726203740259730, default_timeout=1000000)
[2024-09-13 13:02:20.259888] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=28][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:20.259895] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}])
[2024-09-13 13:02:20.259906] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.259921] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=12][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}])
[2024-09-13 13:02:20.259940] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.259950] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.259972] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.260022] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548137138, cache_obj->added_lc()=false, cache_obj->get_object_id()=62, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.260956] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203740259730, ctx_timeout_ts=1726203740259730, worker_timeout_ts=1726203740259730, default_timeout=1000000)
[2024-09-13 13:02:20.260975] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=18][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:20.260981] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}])
[2024-09-13 13:02:20.261027] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=45][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:20.261037] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:20.261051] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=15][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012)
[2024-09-13 13:02:20.261085] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false)
[2024-09-13 13:02:20.261099] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.261107] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.261129] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=7] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721)
[2024-09-13 13:02:20.261144] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=0][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012)
[2024-09-13 13:02:20.261150] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60C87-0-0] [lt=30][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:true, add_timestamp:1726203740261118})
[2024-09-13 13:02:20.261161] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=4][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012)
[2024-09-13 13:02:20.261170] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.261179] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=4] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=1999537)
[2024-09-13 13:02:20.261184] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012)
[2024-09-13 13:02:20.261194] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=7][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:20.261202] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1)
[2024-09-13 13:02:20.261208] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=6][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:20.261218] WDIAG [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1)
[2024-09-13 13:02:20.261234] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=10][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:20.261308] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548138385, cache_obj->added_lc()=false, cache_obj->get_object_id()=63, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.261362] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4012] load failed(ret=-4012, for_update=false)
[2024-09-13 13:02:20.261372] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=8][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:20.261377] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4012] get failed(ret=-4012)
[2024-09-13 13:02:20.261383] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=5][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1})
[2024-09-13 13:02:20.261399] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4601) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=14][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1})
[2024-09-13 13:02:20.261411] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1)
[2024-09-13 13:02:20.261431] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=19] [REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, cost=2001712)
[2024-09-13 13:02:20.261452] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=19][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1)
[2024-09-13 13:02:20.261460] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=7] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2001754)
[2024-09-13 13:02:20.261470] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=9][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1])
[2024-09-13 13:02:20.261478] INFO [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=7] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:20.261483] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7D-0-0] [lt=3][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:20.261494] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4012] fail to batch process task(ret=-4012)
[2024-09-13 13:02:20.261501] WDIAG [SERVER] run1 (ob_uniq_task_queue.h:456) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1)
[2024-09-13 13:02:20.261528] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=5] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1])
[2024-09-13 13:02:20.261536] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=6] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1)
[2024-09-13 13:02:20.262268] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0)
[2024-09-13 13:02:20.262754] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.263046] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.263074] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.263088] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.263104] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.263117] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.263131] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=14] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:20.263145] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:20.263153] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638) [2024-09-13 13:02:20.263255] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.263336] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.263472] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.263488] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.263501] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.263514] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:20.263530] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740263529, replica_locations:[]}) [2024-09-13 13:02:20.263551] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=19][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:20.263570] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0][errcode=-4638] fail to refresh core partition(tmp_ret=-4721) [2024-09-13 13:02:20.263802] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.263809] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000) [2024-09-13 13:02:20.263820] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.263823] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12][errcode=-4638] [2024-09-13 13:02:20.263832] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.263847] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.263862] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740263861, replica_locations:[]}) [2024-09-13 13:02:20.263930] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.263941] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1997605, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.264030] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.264186] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.264203] 
WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.264212] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.264225] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.264237] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.264251] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:20.264250] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.264262] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:20.264262] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to 
get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.264273] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0) [2024-09-13 13:02:20.264272] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.264282] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.264298] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740264297, replica_locations:[]}) [2024-09-13 13:02:20.264316] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.264339] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.264343] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.264351] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.264372] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.264413] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548141529, cache_obj->added_lc()=false, cache_obj->get_object_id()=64, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.264640] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.264660] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.264669] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.264681] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.264690] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.264702] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:20.264711] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:20.264717] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1) [2024-09-13 13:02:20.264790] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.264965] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.264980] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.264992] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.265004] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.265015] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.265028] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:20.265038] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:20.265048] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2) [2024-09-13 13:02:20.265056] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638) [2024-09-13 13:02:20.265075] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=18][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER") 
[2024-09-13 13:02:20.265086] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2) [2024-09-13 13:02:20.265425] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.265613] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.265631] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.265640] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.265661] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.265672] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203740265671, replica_locations:[]}) [2024-09-13 13:02:20.265710] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=1000, remain_us=1995836, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.266902] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.267101] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.267116] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.267122] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.267131] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.267140] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740267139, replica_locations:[]}) [2024-09-13 13:02:20.267152] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.267171] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.267179] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.267220] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.267251] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548144369, cache_obj->added_lc()=false, cache_obj->get_object_id()=65, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.268091] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.268315] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.268340] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.268348] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.268356] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.268364] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740268363, replica_locations:[]}) [2024-09-13 13:02:20.268424] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1993121, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 
13:02:20.270981] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.272128] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.272152] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.272159] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.272171] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.272187] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740272186, replica_locations:[]}) [2024-09-13 13:02:20.272203] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew 
tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.272228] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.272237] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.272260] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.272302] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548149418, cache_obj->added_lc()=false, cache_obj->get_object_id()=66, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.273433] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.273657] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.273685] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.273693] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.273702] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.273722] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740273721, replica_locations:[]}) [2024-09-13 13:02:20.273785] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1] will sleep(sleep_us=3000, remain_us=1987760, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.276344] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=14] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:20.277048] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=42][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.277297] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.277319] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.277331] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.277343] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.277359] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740277358, replica_locations:[]}) [2024-09-13 13:02:20.277380] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 
13:02:20.277411] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.277423] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.277496] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.277557] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548154668, cache_obj->added_lc()=false, cache_obj->get_object_id()=67, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.278912] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.279129] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.279154] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.279164] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.279179] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.279196] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740279194, replica_locations:[]}) [2024-09-13 13:02:20.279264] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1] will sleep(sleep_us=4000, remain_us=1982282, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.281002] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=14][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:20.283573] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.283832] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.283857] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.283867] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.283903] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=34] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.283918] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740283917, replica_locations:[]}) [2024-09-13 13:02:20.283938] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.283965] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.283976] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.284003] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.284056] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548161169, cache_obj->added_lc()=false, cache_obj->get_object_id()=68, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.285311] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.285547] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.285569] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.285583] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.285597] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.285612] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740285612, replica_locations:[]}) [2024-09-13 13:02:20.285682] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1] will sleep(sleep_us=5000, remain_us=1975863, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.290934] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.291237] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.291259] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.291269] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.291278] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.291292] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740291291, replica_locations:[]}) [2024-09-13 13:02:20.291305] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.291325] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.291334] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.291377] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.291423] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548168537, cache_obj->added_lc()=false, cache_obj->get_object_id()=69, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.292492] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.292758] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.292781] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.292787] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.292799] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.292808] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740292807, replica_locations:[]}) [2024-09-13 13:02:20.292858] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1] will sleep(sleep_us=6000, remain_us=1968687, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.299123] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.299424] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.299456] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.299463] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.299471] 
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.299482] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740299481, replica_locations:[]}) [2024-09-13 13:02:20.299494] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.299522] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.299532] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.299553] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.299597] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548176714, cache_obj->added_lc()=false, cache_obj->get_object_id()=70, 
cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.300615] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.300857] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.300884] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.300893] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.300904] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.300913] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740300912, replica_locations:[]}) [2024-09-13 13:02:20.300968] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1960578, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.302863] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:20.308160] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.308467] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.308494] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.308508] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.308524] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is 
empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.308543] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740308542, replica_locations:[]}) [2024-09-13 13:02:20.308564] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.308595] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.308607] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.308652] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.308732] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=33][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548185845, cache_obj->added_lc()=false, cache_obj->get_object_id()=71, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 
0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.310038] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.310253] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.310278] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.310293] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.310307] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.310322] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740310322, replica_locations:[]}) [2024-09-13 13:02:20.310383] INFO [SERVER] 
sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1951162, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.317577] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=15][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:695, tid:19944}]) [2024-09-13 13:02:20.318701] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.318931] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.318957] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.318968] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.318984] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.319001] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740318999, replica_locations:[]}) [2024-09-13 13:02:20.319022] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.319046] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:8, local_retry_times:8, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:20.319069] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.319080] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.319096] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.319106] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.319116] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:20.319135] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:20.319150] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.319237] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548196350, cache_obj->added_lc()=false, cache_obj->get_object_id()=72, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.320428] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.320475] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=46][errcode=-4721] fail to get tablet locations(ret=-4721, 
tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.320630] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.320896] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.320917] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.320931] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.320945] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.320974] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740320973, replica_locations:[]}) [2024-09-13 13:02:20.320992] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=16][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.321006] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.321019] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.321036] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:20.321047] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:20.321059] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:20.321078] WDIAG [SQL.OPT] calculate_candi_tablet_locations 
(ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:20.321093] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.321105] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.321116] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:20.321126] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:20.321135] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:20.321146] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:20.321158] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] failed to generate plan tree 
for plain select(ret=-4721) [2024-09-13 13:02:20.321169] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:20.321179] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:20.321189] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:20.321200] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:20.321210] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:20.321227] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:20.321239] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:20.321251] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:20.321262] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY 
row_id, column_name, ret=-4721) [2024-09-13 13:02:20.321273] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:20.321284] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=9, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:20.321308] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] will sleep(sleep_us=9000, remain_us=1940238, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.326676] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.326726] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=49][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:20.326757] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:20.326771] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] 
[lt=13][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:20.326786] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:20.326801] WDIAG [STORAGE.TRANS] operator() (ob_ts_mgr.h:175) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4721] refresh gts failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:20.326814] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:20.330568] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.330858] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.330896] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=36][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.330910] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.330926] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.330945] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740330944, replica_locations:[]}) [2024-09-13 13:02:20.330967] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.331010] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=37][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:9, local_retry_times:9, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:20.331033] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.331046] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.331062] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.331073] WDIAG [SERVER] 
force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.331083] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:20.331119] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:20.331134] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.331191] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548208305, cache_obj->added_lc()=false, cache_obj->get_object_id()=73, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.332488] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}) [2024-09-13 13:02:20.332520] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=31][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.332628] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.332862] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.332908] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=45][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.332922] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.332938] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.332955] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203740332954, replica_locations:[]}) [2024-09-13 13:02:20.332975] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.332990] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.333018] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=27][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.333035] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:20.333046] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:20.333058] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, 
candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:20.333077] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:20.333086] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.333097] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.333109] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:20.333134] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=25][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:20.333144] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:20.333155] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, 
column_name) [2024-09-13 13:02:20.333167] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:20.333177] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:20.333188] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:20.333198] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:20.333208] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:20.333219] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:20.333236] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:20.333249] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:20.333261] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:20.333271] WDIAG [SQL] stmt_query 
(ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:20.333283] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:20.333294] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=10, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:20.333318] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] will sleep(sleep_us=10000, remain_us=1928228, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.343574] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.344015] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.344048] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.344059] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.344071] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.344092] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740344090, replica_locations:[]}) [2024-09-13 13:02:20.344114] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.344139] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:10, local_retry_times:10, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:20.344161] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 
13:02:20.344173] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.344188] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.344199] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.344205] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:20.344222] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:20.344235] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.344292] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548221405, cache_obj->added_lc()=false, cache_obj->get_object_id()=74, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 
0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.345524] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.345561] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=36][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.345699] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.345977] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.346001] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.346011] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.346026] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.346042] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740346041, replica_locations:[]}) [2024-09-13 13:02:20.346061] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.346075] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.346088] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.346106] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:20.346117] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:20.346129] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:20.346149] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:20.346163] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.346224] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.346244] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:20.346255] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:20.346265] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=9][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:20.346277] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:20.346291] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:20.346301] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:20.346309] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:20.346319] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:20.346329] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:20.346340] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:20.346358] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:20.346370] WDIAG 
[SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:20.346382] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:20.346392] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:20.346404] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:20.346415] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=11, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:20.346481] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=55] will sleep(sleep_us=11000, remain_us=1915065, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.348067] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=25] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:20.349218] WDIAG [STORAGE.TRANS] 
process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5C-0-0] [lt=19][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740348760) [2024-09-13 13:02:20.349246] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5C-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203740348760}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:20.349272] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:20.349292] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, 
dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:20.349326] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740349264) [2024-09-13 13:02:20.349342] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203740149218, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:20.349371] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:20.349386] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:20.349393] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740349356) [2024-09-13 13:02:20.353168] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B37-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:20.353188] 
WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B37-0-0] [lt=19][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203740352573], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:20.353655] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC6-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:20.354321] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC6-0-0] [lt=18][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203740353989, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035086, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203740353733}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:20.354374] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC6-0-0] [lt=52][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:20.357729] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.358025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:20.358052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.358065] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.358079] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.358099] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740358097, replica_locations:[]}) [2024-09-13 13:02:20.358120] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.358146] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:11, local_retry_times:11, err_:-4721, 
err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:20.358168] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.358181] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.358196] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.358206] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.358216] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:20.358236] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:20.358251] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.358309] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548235422, 
cache_obj->added_lc()=false, cache_obj->get_object_id()=75, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.359653] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.359689] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=35][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.359832] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.360050] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.360074] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.360087] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] leader doesn't exist, try 
use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.360101] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.360117] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740360116, replica_locations:[]}) [2024-09-13 13:02:20.360135] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.360149] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.360158] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.360175] WDIAG [SQL.DAS] 
block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:20.360185] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:20.360197] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:20.360215] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:20.360226] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.360236] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.360248] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:20.360257] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:20.360266] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:20.360278] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:20.360287] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:20.360298] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:20.360307] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:20.360313] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:20.360320] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:20.360331] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:20.360348] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:20.360360] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:20.360395] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=34][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:20.360406] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:20.360418] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:20.360428] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=12, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:20.360471] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=32] will sleep(sleep_us=12000, remain_us=1901075, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.372710] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.373979] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.374005] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.374013] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.374026] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.374042] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740374040, replica_locations:[]}) [2024-09-13 13:02:20.374054] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.374074] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:12, local_retry_times:12, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:20.374093] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.374118] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.374130] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.374138] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.374142] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:20.374172] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = 
'__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:20.374184] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.374231] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548251348, cache_obj->added_lc()=false, cache_obj->get_object_id()=76, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.376011] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.376039] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.376201] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.376385] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.376412] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.376426] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.376456] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=27] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.376476] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740376474, replica_locations:[]}) [2024-09-13 13:02:20.376497] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.376512] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", 
cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.376526] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.376543] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:20.376552] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:20.376563] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:20.376583] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:20.376597] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4721] Failed to calculate table location(ret=-4721) 
[2024-09-13 13:02:20.376605] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.376615] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:20.376623] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:20.376630] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:20.376643] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:20.376652] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:20.376657] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:20.376662] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] fail to generate raw 
plan(ret=-4721) [2024-09-13 13:02:20.376666] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:20.376671] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:20.376675] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:20.376693] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:20.376706] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:20.376715] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:20.376723] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:20.376735] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:20.376743] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, 
tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=13, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:20.376766] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] will sleep(sleep_us=13000, remain_us=1884780, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.382309] INFO [OCCAM] get_idx (ob_occam_time_guard.h:224) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] init point thread id with(&point=0x55a3873cc000, idx_=3758, point=[thread id=20142, timeout ts=08:00:00.0, last click point="(null):(null):0", last click ts=08:00:00.0], thread_id=20142) [2024-09-13 13:02:20.382338] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=27] ====== tenant freeze timer task ====== [2024-09-13 13:02:20.382398] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=28][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:20.382423] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=20][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:20.382444] INFO [STORAGE] check_and_freeze_tx_data_ (ob_tenant_freezer.cpp:573) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=19] TxData Memory Statistic : (Tenant Total Memory(MB)=3072, Tenant 
Frozen TxData Memory(MB)=0, Tenant Active TxData Memory(MB)=0, Freeze TxData Trigger Memory(MB)=61, Total TxDataTable Hold Memory(MB)=0, Total TxDataTable Memory Limit(MB)=614) [2024-09-13 13:02:20.390085] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.390340] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.390364] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.390372] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.390383] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.390400] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740390399, replica_locations:[]}) [2024-09-13 
13:02:20.390415] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.390432] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:13, local_retry_times:13, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:20.390458] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.390467] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.390478] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.390486] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.390494] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:20.390507] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, 
column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:20.390518] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.390564] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548267680, cache_obj->added_lc()=false, cache_obj->get_object_id()=77, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.391466] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.391496] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=29][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.391602] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.391845] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.391860] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.391866] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.391882] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.391892] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740391891, replica_locations:[]}) [2024-09-13 13:02:20.391903] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.391909] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] renew location 
failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:20.391915] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:20.391927] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:20.391933] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:20.391938] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:20.391951] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:20.391962] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] 
Failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.391968] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:20.391974] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:20.391980] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:20.391984] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:20.391993] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:20.392003] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:20.392010] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:20.392015] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:20.392021] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:20.392026] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:20.392033] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:20.392046] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:20.392054] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:20.392062] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:20.392070] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:20.392078] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:20.392086] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=14, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:20.392107] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] will sleep(sleep_us=14000, remain_us=1869439, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.406394] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.406865] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.406905] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.406916] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.406926] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.406940] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740406938, replica_locations:[]}) [2024-09-13 13:02:20.406952] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.406971] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:14, local_retry_times:14, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:20.406993] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.407010] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.407022] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.407031] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:20.407036] 
WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:20.407077] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.407137] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548284254, cache_obj->added_lc()=false, cache_obj->get_object_id()=78, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.408241] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.408581] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.408612] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.408622] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] leader doesn't exist, try 
use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.408636] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.408655] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740408653, replica_locations:[]}) [2024-09-13 13:02:20.408722] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1852824, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.417370] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.423984] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.424269] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.424292] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.424299] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.424311] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.424325] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740424324, replica_locations:[]}) [2024-09-13 13:02:20.424341] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.424366] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.424375] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:20.424398] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.424457] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548301574, cache_obj->added_lc()=false, cache_obj->get_object_id()=79, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.425521] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.425727] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.425750] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.425757] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.425765] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.425773] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740425773, replica_locations:[]}) [2024-09-13 13:02:20.425825] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1835720, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.433301] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=23] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14045415834, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:20.442079] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.442344] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.442369] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.442378] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.442389] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.442409] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740442408, replica_locations:[]}) [2024-09-13 13:02:20.442430] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.442476] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.442488] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.442535] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.442593] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548319706, cache_obj->added_lc()=false, cache_obj->get_object_id()=80, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.443720] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.443916] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.443937] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.443947] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.443958] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.443975] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740443974, replica_locations:[]}) [2024-09-13 13:02:20.444040] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1] will sleep(sleep_us=17000, remain_us=1817505, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.449256] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5D-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740448824) [2024-09-13 13:02:20.449337] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:20.449300] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5D-0-0] [lt=33][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, 
generate_timestamp:1726203740448824}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:20.449353] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:20.449362] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740449323) [2024-09-13 13:02:20.453412] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.453890] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=46][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.454896] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] sock regist: 0x2b07b3e211a0 fd=131 [2024-09-13 13:02:20.454919] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=20] [ussl] accept new connection, fd:131, src_addr:172.16.51.37:59358 [2024-09-13 13:02:20.454946] INFO acceptfd_handle_first_readable_event 
(handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] auth mothod is NONE, the fd will be dispatched, fd:131, src_addr:172.16.51.37:59358 [2024-09-13 13:02:20.454957] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=9] PNIO dispatch fd to certain group, fd:131, gid:0x100000002 [2024-09-13 13:02:20.455011] INFO pkts_sk_init (pkts_sk_factory.h:23) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=11] PNIO set pkts_sk_t sock_id s=0x2b07b0be94a8, s->id=65533 [2024-09-13 13:02:20.455027] INFO pkts_sk_new (pkts_sk_factory.h:51) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=17] PNIO sk_new: s=0x2b07b0be94a8 [2024-09-13 13:02:20.455040] INFO eloop_regist (eloop.c:47) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] PNIO sock regist: 0x2b07b0be94a8 fd=131 [2024-09-13 13:02:20.455050] INFO on_accept (listenfd.c:39) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO accept new connection, ns=0x2b07b0be94a8, fd=fd:131:local:"172.16.51.37:59358":remote:"172.16.51.37:59358" [2024-09-13 13:02:20.455110] WDIAG listenfd_handle_event (listenfd.c:71) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=7][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1 [2024-09-13 13:02:20.455132] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.459925] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.460250] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.461303] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.461547] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=46][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.461572] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.461584] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.461611] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=25] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.461629] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740461628, replica_locations:[]}) [2024-09-13 13:02:20.461646] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.461673] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.461684] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.461708] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.461755] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548338872, cache_obj->added_lc()=false, cache_obj->get_object_id()=81, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.462979] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.463206] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.463232] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.463248] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.463265] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.463283] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740463282, replica_locations:[]}) [2024-09-13 13:02:20.463366] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1798180, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.465476] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=21] Cache replace map node details(ret=0, replace_node_count=0, replace_time=3108, replace_start_pos=125828, replace_num=62914) [2024-09-13 13:02:20.465499] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=21] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10) 
[2024-09-13 13:02:20.469176] INFO [LIB] log_compress_loop_ (ob_log_compressor.cpp:393) [19885][SyslogCompress][T0][Y0-0000000000000000-0-0] [lt=9] log compressor cycles once. (ret=0, cost_time=1064, compressed_file_count=0, deleted_file_count=0, disk_remaining_size=182295486464) [2024-09-13 13:02:20.481634] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.481925] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.481958] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.481974] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.481991] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.482014] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740482012, replica_locations:[]}) [2024-09-13 13:02:20.482032] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.482069] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.482080] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.482120] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.482179] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548359294, cache_obj->added_lc()=false, cache_obj->get_object_id()=82, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.483269] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=42][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.483491] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.483512] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.483524] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.483536] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.483550] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740483549, replica_locations:[]}) [2024-09-13 13:02:20.483606] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1777939, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.502988] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4719] get ls 
handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.503166] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.503192] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.503203] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.503217] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.503234] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740503233, replica_locations:[]}) [2024-09-13 13:02:20.503262] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) 
[2024-09-13 13:02:20.503288] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.503300] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.503330] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.503388] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548380504, cache_obj->added_lc()=false, cache_obj->get_object_id()=83, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.504505] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=38][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.504732] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.504754] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.504765] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.504777] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.504791] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740504790, replica_locations:[]}) [2024-09-13 13:02:20.504857] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1] will sleep(sleep_us=20000, remain_us=1756688, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.525155] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.525429] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=36][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.525485] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=54][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.525502] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.525519] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.525540] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740525539, replica_locations:[]})
[2024-09-13 13:02:20.525578] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=35] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.525612] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.525627] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.525673] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.525732] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548402848, cache_obj->added_lc()=false, cache_obj->get_object_id()=84, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.526959] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.527264] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.527284] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.527302] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.527314] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.527326] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740527326, replica_locations:[]})
[2024-09-13 13:02:20.527377] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1734168, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.548602] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.548902] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.548927] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.548939] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.548951] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.548966] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740548965, replica_locations:[]})
[2024-09-13 13:02:20.548981] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.549006] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.549017] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.549047] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.549090] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548426208, cache_obj->added_lc()=false, cache_obj->get_object_id()=85, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.549341] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5E-0-0] [lt=50][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740548916)
[2024-09-13 13:02:20.549374] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5E-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203740548916}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:20.549391] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:20.549409] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740549384)
[2024-09-13 13:02:20.549422] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203740349354, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:20.549453] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.549465] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.549476] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740549439)
[2024-09-13 13:02:20.550113] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=142][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.550317] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.550336] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.550346] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.550358] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.550370] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740550369, replica_locations:[]})
[2024-09-13 13:02:20.550419] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1711127, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.573050] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.573331] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.573344] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.573350] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.573360] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.573374] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740573373, replica_locations:[]})
[2024-09-13 13:02:20.573414] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=37] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.573464] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.573479] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.573524] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.573587] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548450693, cache_obj->added_lc()=false, cache_obj->get_object_id()=86, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.574674] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.574988] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.575003] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.575009] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.575019] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.575036] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740575036, replica_locations:[]})
[2024-09-13 13:02:20.575085] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1686461, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.598351] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.598713] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.598737] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.598747] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.598757] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.598773] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740598771, replica_locations:[]})
[2024-09-13 13:02:20.598794] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.598823] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.598835] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.598862] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.598931] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548476044, cache_obj->added_lc()=false, cache_obj->get_object_id()=87, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.599979] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.600253] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.600271] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.600281] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.600292] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.600305] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740600304, replica_locations:[]})
[2024-09-13 13:02:20.600371] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=24000, remain_us=1661175, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.615690] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=41] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:20.624635] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.625001] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.625020] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.625027] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.625038] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.625050] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740625049, replica_locations:[]})
[2024-09-13 13:02:20.625075] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.625098] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.625106] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.625127] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.625205] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=28][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548502318, cache_obj->added_lc()=false, cache_obj->get_object_id()=88, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.626309] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.626671] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.626688] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.626694] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.626702] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.626714] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740626713, replica_locations:[]})
[2024-09-13 13:02:20.626765] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=25000, remain_us=1634780, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.633720] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=34] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14045415834, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:20.649416] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5F-0-0] [lt=34][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740648971)
[2024-09-13 13:02:20.649457] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:20.649476] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740649450)
[2024-09-13 13:02:20.649487] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203740549432, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:20.649486] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A5F-0-0] [lt=68][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203740648971}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:20.649513] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.649522] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.649529] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740649499)
[2024-09-13 13:02:20.649542] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.649549] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.649555] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740649538)
[2024-09-13 13:02:20.651992] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.652348] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.652365] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.652372] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.652382] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.652394] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740652393, replica_locations:[]})
[2024-09-13 13:02:20.652408] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.652430] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.652446] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.652475] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.652519] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548529635, cache_obj->added_lc()=false, cache_obj->get_object_id()=89, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13
13:02:20.653531] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.653863] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.653890] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.653896] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.653906] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.653919] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740653918, replica_locations:[]}) [2024-09-13 13:02:20.653968] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1] will sleep(sleep_us=26000, remain_us=1607578, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.664995] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF0-0-0] [lt=13][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:20.665017] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF0-0-0] [lt=20][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=994127) [2024-09-13 13:02:20.665025] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF0-0-0] [lt=7][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1) [2024-09-13 13:02:20.665031] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:1126) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF0-0-0] [lt=5][errcode=-4012] base before process failed(ret=-4012) [2024-09-13 13:02:20.665036] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF0-0-0] [lt=5][errcode=-4012] before process fail(ret=-4012) [2024-09-13 13:02:20.665151] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF1-0-0] [lt=4][errcode=0] server is initiating(server_id=0, local_seq=14, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:20.665577] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=8] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9) [2024-09-13 13:02:20.665953] WDIAG [SERVER] submit_async_refresh_schema_task 
(ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF1-0-0] [lt=13][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:20.667271] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119EC0A1172-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:20.668102] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20293][T1_L0_G0][T1][YB42AC103326-00062119EC0A1172-0-0] [lt=20][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:20.668115] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20293][T1_L0_G0][T1][YB42AC103326-00062119EC0A1172-0-0] [lt=12][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=1996082) [2024-09-13 13:02:20.668123] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20293][T1_L0_G0][T1][YB42AC103326-00062119EC0A1172-0-0] [lt=7][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1) [2024-09-13 13:02:20.668129] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:1126) [20293][T1_L0_G0][T1][YB42AC103326-00062119EC0A1172-0-0] [lt=6][errcode=-4012] base before process failed(ret=-4012) [2024-09-13 13:02:20.668137] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20293][T1_L0_G0][T1][YB42AC103326-00062119EC0A1172-0-0] [lt=7][errcode=-4012] before process fail(ret=-4012) [2024-09-13 13:02:20.680168] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.680477] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.680495] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.680508] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.680522] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.680536] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740680534, replica_locations:[]}) [2024-09-13 13:02:20.680550] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 
13:02:20.680572] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.680579] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.680599] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.680640] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548557757, cache_obj->added_lc()=false, cache_obj->get_object_id()=90, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.681602] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.681832] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.681847] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:20.681852] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.681862] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.681871] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740681870, replica_locations:[]}) [2024-09-13 13:02:20.681930] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1579616, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.688346] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=4][errcode=0] server is initiating(server_id=0, local_seq=15, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:20.689181] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:20.709122] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.709441] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.709457] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.709463] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.709470] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.709484] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740709483, replica_locations:[]}) [2024-09-13 13:02:20.709498] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.709518] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.709527] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.709552] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.709593] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548586711, cache_obj->added_lc()=false, cache_obj->get_object_id()=91, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.710518] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.710723] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.710737] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.710743] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.710751] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.710759] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740710758, replica_locations:[]}) [2024-09-13 13:02:20.710804] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1550742, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.725502] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:20.725536] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=17] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, 
elapse_time=0, size_used=0, mem_used=8318976) [2024-09-13 13:02:20.739013] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.739268] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.739284] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.739291] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.739301] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.739310] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740739309, replica_locations:[]}) [2024-09-13 13:02:20.739320] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.739339] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.739348] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.739368] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.739408] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548616525, cache_obj->added_lc()=false, cache_obj->get_object_id()=92, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.740326] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.740505] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:20.740522] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.740534] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.740547] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.740558] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740740557, replica_locations:[]}) [2024-09-13 13:02:20.740605] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1520941, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:20.749526] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A60-0-0] [lt=40][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, 
v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740749037) [2024-09-13 13:02:20.749561] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A60-0-0] [lt=29][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203740749037}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:20.749583] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:20.749604] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740749575) [2024-09-13 13:02:20.749620] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=16][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203740649497, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:20.749652] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.749663] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.749670] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740749638)
[2024-09-13 13:02:20.769863] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.770227] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.770258] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.770268] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.770283] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.770303] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740770302, replica_locations:[]})
[2024-09-13 13:02:20.770329] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.770361] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.770373] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.770418] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.770500] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548647612, cache_obj->added_lc()=false, cache_obj->get_object_id()=93, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.771688] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=37][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.771962] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.771984] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.771993] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.772003] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.772020] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740772020, replica_locations:[]})
[2024-09-13 13:02:20.772088] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1489458, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.802327] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.802669] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.802696] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.802706] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.802729] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.802750] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740802749, replica_locations:[]})
[2024-09-13 13:02:20.802772] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.802804] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.802816] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.802842] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.802913] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548680026, cache_obj->added_lc()=false, cache_obj->get_object_id()=94, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.804008] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=41][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.804248] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.804267] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.804274] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.804285] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.804296] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740804295, replica_locations:[]})
[2024-09-13 13:02:20.804348] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=31000, remain_us=1457198, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.827461] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:20.827552] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]})
[2024-09-13 13:02:20.834138] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=32] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14043318682, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:20.835584] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.835938] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.835965] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.835972] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.835980] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.835991] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740835990, replica_locations:[]})
[2024-09-13 13:02:20.836007] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.836032] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.836041] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.836076] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.836122] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548713239, cache_obj->added_lc()=false, cache_obj->get_object_id()=95, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.837241] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.837479] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.837506] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.837515] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.837530] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.837545] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740837544, replica_locations:[]})
[2024-09-13 13:02:20.837616] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1423930, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.849586] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A61-0-0] [lt=35][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740849126)
[2024-09-13 13:02:20.849651] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.849624] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A61-0-0] [lt=31][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203740849126}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:20.849670] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.849677] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740849637)
[2024-09-13 13:02:20.853451] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B38-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:20.853477] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B38-0-0] [lt=25][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203740853046], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:20.853898] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC7-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:20.855020] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC7-0-0] [lt=20][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203740854724, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035107, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203740854562}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:20.855051] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC7-0-0] [lt=31][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:20.865664] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=13] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8)
[2024-09-13 13:02:20.869844] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.870153] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.870177] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.870196] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.870211] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.870230] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740870228, replica_locations:[]})
[2024-09-13 13:02:20.870250] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.870280] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.870292] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.870320] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.870377] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548747491, cache_obj->added_lc()=false, cache_obj->get_object_id()=96, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.871588] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.871835] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.871854] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.871860] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.871870] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.871886] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740871885, replica_locations:[]})
[2024-09-13 13:02:20.871935] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=33000, remain_us=1389611, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.873148] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=24] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:20.873363] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=20] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:20.874205] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=9] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:20.905151] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.905418] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.905453] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.905464] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.905488] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.905503] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740905502, replica_locations:[]})
[2024-09-13 13:02:20.905525] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.905555] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.905568] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.905613] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.905670] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548782783, cache_obj->added_lc()=false, cache_obj->get_object_id()=97, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.906964] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.907254] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.907276] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.907286] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.907301] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.907316] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740907315, replica_locations:[]})
[2024-09-13 13:02:20.907379] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=34000, remain_us=1354166, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.941580] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.941949] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.941967] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.941973] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.941980] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.941992] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740941991, replica_locations:[]})
[2024-09-13 13:02:20.942006] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:20.942027] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:20.942036] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:20.942054] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:20.942097] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548819215, cache_obj->added_lc()=false, cache_obj->get_object_id()=98, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:20.943090] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:20.943784] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.943802] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:20.943808] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:20.943816] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:20.943824] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740943824, replica_locations:[]})
[2024-09-13 13:02:20.943871] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=35000, remain_us=1317675, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:20.949699] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:20.949736] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203740949693)
[2024-09-13 13:02:20.949751] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203740749635, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:20.949770] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:20.949781] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023,
tenant_id=1) [2024-09-13 13:02:20.949786] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203740949758) [2024-09-13 13:02:20.979116] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:20.979454] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.979474] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.979481] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.979492] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.979503] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, 
renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740979502, replica_locations:[]}) [2024-09-13 13:02:20.979517] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:20.979540] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:20.979562] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:20.979597] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:20.979641] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548856758, cache_obj->added_lc()=false, cache_obj->get_object_id()=99, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:20.980647] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:20.980910] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.980929] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:20.980935] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:20.980942] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:20.980950] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203740980950, replica_locations:[]}) [2024-09-13 13:02:20.980998] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1280548, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.017252] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.017640] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.017663] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.017670] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.017680] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.017697] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741017696, replica_locations:[]}) [2024-09-13 13:02:21.017713] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.017737] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.017746] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.017771] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.017825] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548894942, cache_obj->added_lc()=false, cache_obj->get_object_id()=100, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.018944] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.019353] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.019372] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.019378] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.019386] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.019399] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741019398, replica_locations:[]}) [2024-09-13 13:02:21.019460] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1] will sleep(sleep_us=37000, remain_us=1242086, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.034564] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=31] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14043318682, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:21.049672] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A62-0-0] 
[lt=47][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741049243) [2024-09-13 13:02:21.049706] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A62-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203741049243}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:21.049735] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.049748] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.049759] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, 
server_version_epoch_tstamp_=1726203741049718) [2024-09-13 13:02:21.056721] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.057268] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.057301] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.057309] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.057320] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.057340] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741057338, replica_locations:[]}) [2024-09-13 13:02:21.057358] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.057390] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.057400] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.057467] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.057525] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548934639, cache_obj->added_lc()=false, cache_obj->get_object_id()=101, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.058918] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.059200] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:21.059223] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.059235] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.059248] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.059264] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741059263, replica_locations:[]}) [2024-09-13 13:02:21.059333] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1] will sleep(sleep_us=38000, remain_us=1202213, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.065757] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=15] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7) [2024-09-13 13:02:21.093426] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) 
[20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=25] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.093461] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=6] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.093471] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=5] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.093995] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=20] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.094565] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=11] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.094615] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=7] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.094639] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=5] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.094761] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.096013] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=29] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.097565] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.097928] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.097953] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.097960] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.097973] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.097985] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741097984, replica_locations:[]}) [2024-09-13 13:02:21.098003] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, 
ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.098027] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.098036] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.098057] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.098102] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6548975220, cache_obj->added_lc()=false, cache_obj->get_object_id()=102, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.099173] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.099355] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.099375] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.099381] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.099388] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.099398] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741099397, replica_locations:[]}) [2024-09-13 13:02:21.099461] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1162084, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.118299] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=23] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:21.124292] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=48] PNIO [ratelimit] time: 1726203741124291, bytes: 2442551, bw: 0.122678 MB/s, add_ts: 1007613, add_bytes: 129617 [2024-09-13 13:02:21.126124] WDIAG [SHARE] refresh 
(ob_alive_server_tracer.cpp:138) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C7D-0-0] [lt=7][errcode=-4002] invalid argument, empty server list(ret=-4002) [2024-09-13 13:02:21.126153] WDIAG [SHARE] refresh (ob_alive_server_tracer.cpp:380) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C7D-0-0] [lt=27][errcode=-4002] refresh sever list failed(ret=-4002) [2024-09-13 13:02:21.126161] WDIAG [SHARE] runTimerTask (ob_alive_server_tracer.cpp:255) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C7D-0-0] [lt=8][errcode=-4002] refresh alive server list failed(ret=-4002) [2024-09-13 13:02:21.129268] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC76-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.130419] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB216F-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.130914] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2173-0-0] [lt=25][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.131186] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2174-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.131650] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2178-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.131883] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2179-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.132241] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB217D-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) 
[2024-09-13 13:02:21.132483] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB217E-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.132749] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2182-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.132943] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2183-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.133470] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2187-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.133976] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=4] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, table_name.ptr()="data_size:12, data:5F5F616C6C5F736572766572", ret=-5019) [2024-09-13 13:02:21.133999] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=21][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-09-13 13:02:21.134008] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_server, db_name=oceanbase) [2024-09-13 13:02:21.134018] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_server) 
[2024-09-13 13:02:21.134025] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:21.134032] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:21.134039] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_server' doesn't exist [2024-09-13 13:02:21.134046] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:21.134050] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:21.134056] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=5][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:21.134060] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:21.134068] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=7][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:21.134072] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] 
[lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:21.134076] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:21.134087] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=6][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:21.134095] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=7][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:21.134101] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:21.134108] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=7][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:21.134113] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server, ret=-5019) [2024-09-13 13:02:21.134121] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=6][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:21.134125] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, retry_cnt=0, 
local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:21.134138] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=10][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:21.134153] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:21.134159] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=5][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:21.134169] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:21.134192] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=7][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:21.134201] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7D-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.134207] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19878][ServerGTimer][T0][YB42AC103323-000621F921960C7D-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, aret=-5019, ret=-5019) [2024-09-13 13:02:21.134213] WDIAG [SERVER] execute_read_inner 
(ob_inner_sql_connection.cpp:1786) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server) [2024-09-13 13:02:21.134218] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:21.134223] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:21.134231] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203741133687, sql=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server) [2024-09-13 13:02:21.134240] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:21.134308] WDIAG [SHARE] refresh (ob_all_server_tracer.cpp:568) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] fail to get servers_info(ret=-5019, ret="OB_TABLE_NOT_EXIST", GCTX.sql_proxy_=0x55a386ae7408) [2024-09-13 13:02:21.134313] WDIAG [SHARE] runTimerTask (ob_all_server_tracer.cpp:626) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] fail to refresh all server map(ret=-5019) [2024-09-13 13:02:21.138640] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.138914] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.138934] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.138941] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.138949] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.138963] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741138962, replica_locations:[]}) [2024-09-13 13:02:21.138977] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.138998] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) 
[2024-09-13 13:02:21.139004] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.139020] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.139059] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549016176, cache_obj->added_lc()=false, cache_obj->get_object_id()=103, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.140070] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.140268] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.140291] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.140299] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.140313] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.140328] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741140327, replica_locations:[]}) [2024-09-13 13:02:21.140377] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=40000, remain_us=1121169, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.149713] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A63-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741149319) [2024-09-13 13:02:21.149758] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, 
dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:21.149744] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A63-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203741149319}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:21.149951] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=191][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741149752) [2024-09-13 13:02:21.149959] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203740949758, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:21.149977] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.149985] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.149990] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741149967) [2024-09-13 13:02:21.151594] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=34] PNIO [ratelimit] time: 1726203741151592, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007614, add_bytes: 0 [2024-09-13 13:02:21.156514] INFO [SQL] check_session_leak (ob_sql_session_mgr.cpp:620) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8] get current session count(used_session_count=7, hold_session_count=7, session_leak_count_threshold=100) [2024-09-13 13:02:21.180579] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.180895] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.180925] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.180934] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.180943] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.180957] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741180955, replica_locations:[]}) [2024-09-13 13:02:21.180972] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.180997] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.181006] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.181033] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.181084] WDIAG [SQL.PC] 
common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549058200, cache_obj->added_lc()=false, cache_obj->get_object_id()=104, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.182112] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.182314] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.182333] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.182339] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.182346] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.182354] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741182353, replica_locations:[]}) [2024-09-13 13:02:21.182403] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=41000, remain_us=1079142, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.187254] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782DB-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.195929] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.196562] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.197784] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.200111] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.200735] INFO [OCCAM] get_idx (ob_occam_time_guard.h:224) [20107][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=5] init point thread id with(&point=0x55a3873cb740, idx_=3723, point=[thread id=20107, 
timeout ts=08:00:00.0, last click point="(null):(null):0", last click ts=08:00:00.0], thread_id=20107) [2024-09-13 13:02:21.200772] WDIAG [OCCAM] dump_statistics (ob_vtable_event_recycle_buffer.h:144) [20107][T1_Occam][T1][YB42AC103323-000621F921A60C7D-0-0] [lt=26][errcode=-4006] not init(ret=-4006) [2024-09-13 13:02:21.200805] INFO [MDS] dump_map_holding_item (mds_tenant_service.cpp:468) [20107][T1_Occam][T1][YB42AC103323-000621F921A60C7D-0-0] [lt=10] finish scan map holding items(scan_cnt=0) [2024-09-13 13:02:21.200824] INFO [MDS] for_each_ls_in_tenant (mds_tenant_service.cpp:237) [20107][T1_Occam][T1][YB42AC103323-000621F921A60C7E-0-0] [lt=10] for each ls(succ_num=0, ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.201282] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.203995] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.205045] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.206109] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=32] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:21.208645] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.209685] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.214287] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.215431] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.221198] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.222349] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.223684] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.223978] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.223999] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.224006] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.224014] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.224028] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741224027, replica_locations:[]}) [2024-09-13 13:02:21.224044] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.224068] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.224077] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.224100] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.224156] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] 
set logical del time(cache_obj->get_logical_del_time()=6549101273, cache_obj->added_lc()=false, cache_obj->get_object_id()=105, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.225282] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.225546] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.225574] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.225585] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.225588] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=12] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:21.225596] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:21.225613] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741225612, replica_locations:[]}) [2024-09-13 13:02:21.225617] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=10398720) [2024-09-13 13:02:21.225670] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=42000, remain_us=1035875, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.228579] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=17] gc stale ls task succ [2024-09-13 13:02:21.228958] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.230161] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.231723] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.232030] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.232046] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.232060] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.232067] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.232095] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=5][errcode=0] server is initiating(server_id=0, local_seq=16, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:21.232188] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10][errcode=0] server is initiating(server_id=0, local_seq=17, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:21.232972] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=12] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:21.233329] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=16] table not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, 
table_name.ptr()="data_size:16, data:5F5F616C6C5F6D657267655F696E666F", ret=-5019) [2024-09-13 13:02:21.233353] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=22][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, ret=-5019) [2024-09-13 13:02:21.233362] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=9][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_merge_info, db_name=oceanbase) [2024-09-13 13:02:21.233369] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_merge_info) [2024-09-13 13:02:21.233376] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=6][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:21.233380] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:21.233388] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=5][errcode=-5019] Table 'oceanbase.__all_merge_info' doesn't exist [2024-09-13 13:02:21.233402] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=13][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:21.233406] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=5][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:21.233412] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=5][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:21.233416] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:21.233420] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:21.233424] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:21.233421] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:21.233428] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:21.233446] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:21.233433] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=11][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:21.233453] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:21.233455] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=21][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:21.233459] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=5][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:21.233462] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:21.233463] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:21.233468] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:21.233468] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_merge_info WHERE tenant_id = '1', ret=-5019) [2024-09-13 13:02:21.233474] WDIAG 
[SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=6][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:21.233474] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=5][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:21.233480] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:21.233485] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:21.233489] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=3][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:21.233496] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=6][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:21.233485] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=10][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:21.233500] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:21.233503] WDIAG [SQL.RESV] resolve_normal_query 
(ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:21.233509] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=5][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:21.233504] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=15][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:21.233513] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:21.233517] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=9][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:21.233521] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:21.233522] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=5][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:21.233526] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:21.233526] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] Failed to 
generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:21.233532] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:21.233537] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:21.233541] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=3][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:21.233546] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:21.233551] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:21.233564] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:21.233566] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=12][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, 
err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:21.233571] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=5][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.233577] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=8][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:21.233576] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7D-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, aret=-5019, ret=-5019) [2024-09-13 13:02:21.233581] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=3][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:21.233581] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1') [2024-09-13 13:02:21.233584] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:21.233586] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:21.233591] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] execute_read failed(ret=-5019, 
cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:21.233598] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:21.233604] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=6][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.233604] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203741233105, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1') [2024-09-13 13:02:21.233609] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:21.233610] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:21.233614] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:21.233618] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, 
tenant_id=1) [2024-09-13 13:02:21.233618] WDIAG [SHARE] load_global_merge_info (ob_global_merge_table_operator.cpp:49) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, meta_tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1') [2024-09-13 13:02:21.233622] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:21.233627] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203741233217, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:21.233641] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=14][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:21.233646] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=4][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:21.233696] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=6][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:21.233672] WDIAG [STORAGE] refresh_merge_info (ob_tenant_freeze_info_mgr.cpp:890) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] failed to load global merge info(ret=-5019, ret="OB_TABLE_NOT_EXIST", global_merge_info={tenant_id:1, 
cluster:{name:"cluster", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, frozen_scn:{name:"frozen_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, global_broadcast_scn:{name:"global_broadcast_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, last_merged_scn:{name:"last_merged_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, is_merge_error:{name:"is_merge_error", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, merge_status:{name:"merge_status", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, error_type:{name:"error_type", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, suspend_merging:{name:"suspend_merging", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, merge_start_time:{name:"merge_start_time", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, last_merged_time:{name:"last_merged_time", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}}) [2024-09-13 13:02:21.233708] WDIAG [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:1005) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=35][errcode=-5019] fail to refresh merge info(tmp_ret=-5019, tmp_ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:21.233724] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=27][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:21.233732] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=7][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:21.233741] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=8][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:21.233748] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=0] server is initiating(server_id=0, local_seq=18, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:21.233750] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=6][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:21.233757] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=6][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:21.233767] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C7F-0-0] [lt=9][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:21.234857] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14037027226, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:21.235610] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:21.235757] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.236052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.236077] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.236083] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.236090] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.236099] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741236098, replica_locations:[]}) [2024-09-13 13:02:21.236145] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1997576, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.236225] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] 
[lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.236420] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.236572] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=22][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:21.236588] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:21.236597] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:21.236607] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:21.236643] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=222][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.236648] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.236659] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] server_list is 
empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.236666] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741236666, replica_locations:[]}) [2024-09-13 13:02:21.236684] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.236703] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.236712] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.236728] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.236789] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549113878, cache_obj->added_lc()=false, cache_obj->get_object_id()=107, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 
0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.237584] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.237612] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.237842] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.237858] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.237864] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.237871] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.237887] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741237886, replica_locations:[]}) [2024-09-13 13:02:21.237932] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=1000, remain_us=1995789, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.238979] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.239091] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.239313] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.239331] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.239337] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.239344] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.239351] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741239350, replica_locations:[]}) [2024-09-13 13:02:21.239364] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.239381] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.239389] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.239411] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.239447] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549116558, cache_obj->added_lc()=false, cache_obj->get_object_id()=108, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 
0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.240117] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.240365] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.240384] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.240390] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.240402] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.240410] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741240409, replica_locations:[]}) [2024-09-13 13:02:21.240450] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1993271, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.242640] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.242944] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.242962] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.242968] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.242984] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.242992] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203741242992, replica_locations:[]}) [2024-09-13 13:02:21.243005] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.243021] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.243028] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.243043] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.243068] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549120189, cache_obj->added_lc()=false, cache_obj->get_object_id()=109, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.243773] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.244029] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] 
[lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.244046] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.244052] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.244071] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.244079] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741244078, replica_locations:[]}) [2024-09-13 13:02:21.244112] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1989608, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.247267] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:21.247448] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.247625] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.247646] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.247652] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.247661] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.247672] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741247672, replica_locations:[]}) [2024-09-13 13:02:21.247685] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] 
[TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.247699] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.247715] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.247737] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.247761] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549124883, cache_obj->added_lc()=false, cache_obj->get_object_id()=110, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.248395] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.248653] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.248669] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.248675] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.248683] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.248691] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741248690, replica_locations:[]}) [2024-09-13 13:02:21.248725] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1984995, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.248983] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=11][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:21.248995] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.249827] WDIAG 
[STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.249831] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A64-0-0] [lt=33][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741249385) [2024-09-13 13:02:21.249848] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.249855] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741249815) [2024-09-13 13:02:21.249851] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A64-0-0] [lt=19][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203741249385}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, 
total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:21.249866] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:21.249899] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.249903] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.249907] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741249896) [2024-09-13 13:02:21.252885] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.253181] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.253249] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=65][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.253258] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.253266] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.253274] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741253274, replica_locations:[]}) [2024-09-13 13:02:21.253286] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.253303] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.253314] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.253334] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.253367] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549130486, cache_obj->added_lc()=false, cache_obj->get_object_id()=111, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.254052] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.254314] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.254342] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.254351] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.254364] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.254374] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741254373, replica_locations:[]}) [2024-09-13 13:02:21.254419] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1979301, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.257740] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=7] table not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, table_name.ptr()="data_size:27, data:5F5F616C6C5F7669727475616C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:21.257769] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=27][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, ret=-5019) [2024-09-13 13:02:21.257780] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=9][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, 
ret=-5019, database_id=201001, database_id=201001, table_name=__all_virtual_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:21.257787] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_virtual_ls_meta_table) [2024-09-13 13:02:21.257794] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:21.257798] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:21.257804] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_virtual_ls_meta_table' doesn't exist [2024-09-13 13:02:21.257809] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=4][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:21.257819] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=10][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:21.257829] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=8][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:21.257838] WDIAG [SQL.RESV] resolve_joined_table_item (ob_dml_resolver.cpp:3379) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=8][errcode=-5019] resolve table failed(ret=-5019) [2024-09-13 
13:02:21.257847] WDIAG [SQL.RESV] resolve_joined_table (ob_dml_resolver.cpp:2934) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=8][errcode=-5019] resolve joined table item failed(ret=-5019)
[2024-09-13 13:02:21.257857] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2788) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=9][errcode=-5019] resolve joined table failed(ret=-5019)
[2024-09-13 13:02:21.257863] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=5][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:21.257869] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=5][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:21.257895] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=26][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:21.257901] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=5][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:21.257915] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=9][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:21.257925] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=9][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:21.257936] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=9][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:21.257946] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=9][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:21.257958] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=11][errcode=-5019] fail to handle text query(stmt=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;, ret=-5019)
[2024-09-13 13:02:21.257969] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=10][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:21.257980] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=10][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:21.257997] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=13][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:21.258014] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=14][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:21.258024] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=9][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:21.258029] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:21.258052] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=5][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:21.258085] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7E-0-0] [lt=32][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.258093] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20295][BlackListServic][T0][YB42AC103323-000621F921260C7E-0-0] [lt=6][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:21.258104] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;)
[2024-09-13 13:02:21.258180] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=75][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:21.258213] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:21.258229] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203741257427, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;)
[2024-09-13 13:02:21.258243] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:111) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:21.258254] WDIAG [STORAGE.TRANS] do_thread_task_ (ob_black_list.cpp:222) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;)
[2024-09-13 13:02:21.258336] INFO [STORAGE.TRANS] run1 (ob_black_list.cpp:194) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=11] ls blacklist refresh finish(cost_time=1879)
[2024-09-13 13:02:21.258573] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.259643] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.259886] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.259938] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=47][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.259958] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.259967] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.260002] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.260025] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741260024, replica_locations:[]})
[2024-09-13 13:02:21.260065] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=38] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.260086] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.260096] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.260117] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.260148] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549137268, cache_obj->added_lc()=false, cache_obj->get_object_id()=112, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.261022] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.261668] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.261687] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.261693] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.261701] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.261715] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741261714, replica_locations:[]})
[2024-09-13 13:02:21.261770] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1971950, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.265844] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6)
[2024-09-13 13:02:21.267849] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.267934] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.268142] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.268162] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.268168] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.268178] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.268187] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741268186, replica_locations:[]})
[2024-09-13 13:02:21.268200] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.268219] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.268228] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.268289] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.268305] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.268311] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.268320] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.268330] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741268330, replica_locations:[]})
[2024-09-13 13:02:21.268342] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.268356] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.268364] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.268379] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.268416] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549145536, cache_obj->added_lc()=false, cache_obj->get_object_id()=113, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.268255] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.269221] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.269250] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=965][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549146367, cache_obj->added_lc()=false, cache_obj->get_object_id()=106, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.269970] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.269991] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.270007] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.270016] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.270026] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.270038] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741270037, replica_locations:[]})
[2024-09-13 13:02:21.270088] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1963632, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.270217] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.270227] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.270233] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.270239] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.270245] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=3] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741270245, replica_locations:[]})
[2024-09-13 13:02:21.270274] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=991271, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:21.270378] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.271689] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.277275] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.277545] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.277568] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.277578] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.277591] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.277600] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741277600, replica_locations:[]})
[2024-09-13 13:02:21.277611] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.277632] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.277639] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.277669] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.277702] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549154822, cache_obj->added_lc()=false, cache_obj->get_object_id()=114, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.278477] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.278728] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.278743] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.278749] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.278769] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.278777] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741278777, replica_locations:[]})
[2024-09-13 13:02:21.278824] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1954896, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.283286] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.284550] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.287074] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.287341] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.287363] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.287370] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.287379] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.287390] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741287389, replica_locations:[]})
[2024-09-13 13:02:21.287401] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.287448] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.287457] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.287478] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.287518] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549164637, cache_obj->added_lc()=false, cache_obj->get_object_id()=116, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.288536] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.288741] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.288761] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.288768] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.288775] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.288788] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741288786, replica_locations:[]})
[2024-09-13 13:02:21.288849] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1944871, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.297091] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.298071] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.298317] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=44][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.298338] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.298345] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.298352] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.298366] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741298365, replica_locations:[]})
[2024-09-13 13:02:21.298377] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.298417] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.298427] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.298422] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.298504] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.298549] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549175667, cache_obj->added_lc()=false, cache_obj->get_object_id()=117, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 
0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.299495] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.299703] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.299721] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.299728] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.299734] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.299743] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741299742, replica_locations:[]}) [2024-09-13 13:02:21.299788] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=10000, remain_us=1933933, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.310018] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.310239] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.310261] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.310268] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.310276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.310288] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203741310286, replica_locations:[]}) [2024-09-13 13:02:21.310313] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.310336] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.310344] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.310366] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.310413] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549187529, cache_obj->added_lc()=false, cache_obj->get_object_id()=118, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.311455] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.311650] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] 
[lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.311667] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.311674] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.311684] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.311692] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741311692, replica_locations:[]}) [2024-09-13 13:02:21.311746] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1921974, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.311921] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:21.313238] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.313418] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.313709] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.313727] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.313734] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.313744] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.313756] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741313756, 
replica_locations:[]}) [2024-09-13 13:02:21.313769] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.313789] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.313798] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.313822] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.313896] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549190995, cache_obj->added_lc()=false, cache_obj->get_object_id()=115, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.314717] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.314958] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.314980] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.314992] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.315004] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.315019] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741315018, replica_locations:[]}) [2024-09-13 13:02:21.315073] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1] will sleep(sleep_us=44000, remain_us=946473, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.320592] INFO [SHARE] blacklist_loop_ (ob_server_blacklist.cpp:313) [20019][Blacklist][T0][Y0-0000000000000000-0-0] [lt=14] blacklist_loop exec finished(cost_time=17, is_enabled=true, send_cnt=0) [2024-09-13 
13:02:21.322943] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.323210] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.323228] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.323234] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.323241] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.323251] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741323250, replica_locations:[]}) [2024-09-13 13:02:21.323264] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] [TABLET_LOCATION] 
batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.323280] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.323289] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.323324] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.323363] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549200481, cache_obj->added_lc()=false, cache_obj->get_object_id()=119, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.324257] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.324485] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.324502] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.324508] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.324516] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.324525] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741324524, replica_locations:[]}) [2024-09-13 13:02:21.324573] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1909148, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.327765] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.328077] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, 
latest_srr:[mts=0]}) [2024-09-13 13:02:21.328097] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19] refresh gts(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1, need_refresh=false, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:21.328106] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:21.328106] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60C8D-0-0] [lt=19][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203741328052}) [2024-09-13 13:02:21.329104] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.336775] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.337024] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.337057] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.337064] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.337072] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.337085] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741337084, replica_locations:[]}) [2024-09-13 13:02:21.337099] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.337122] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.337130] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.337151] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.337191] WDIAG 
[SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549214309, cache_obj->added_lc()=false, cache_obj->get_object_id()=121, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.338365] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.338608] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.338651] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=42][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.338661] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.338672] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.338689] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741338688, replica_locations:[]}) [2024-09-13 13:02:21.338749] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] will sleep(sleep_us=13000, remain_us=1894972, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.344662] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.346005] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.348168] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=27] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:21.349896] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:21.349916] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, 
tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:21.349930] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741349888) [2024-09-13 13:02:21.349945] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203741149965, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:21.349967] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.349975] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.349980] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741349952) [2024-09-13 13:02:21.350036] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A65-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in 
cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741349460) [2024-09-13 13:02:21.350075] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.350079] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.350082] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741350072) [2024-09-13 13:02:21.350069] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20290][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A65-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203741349460}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:21.352005] 
WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=39][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.352318] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.352339] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.352355] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741352354, replica_locations:[]}) [2024-09-13 13:02:21.352373] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.352399] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.352413] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.352459] WDIAG 
[SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.352507] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549229624, cache_obj->added_lc()=false, cache_obj->get_object_id()=122, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.353963] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.354048] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B39-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:21.354070] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B39-0-0] [lt=21][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203741353588], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:21.354235] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.354257] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.354273] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741354272, replica_locations:[]}) [2024-09-13 13:02:21.354344] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1879376, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.354533] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC8-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.355262] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC8-0-0] [lt=10][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203741354950, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035152, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203741354546}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:21.355285] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) 
[20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC8-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.359329] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.359633] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.359663] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=27] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.359683] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741359682, replica_locations:[]}) [2024-09-13 13:02:21.359707] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.359742] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.359754] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.359797] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.359870] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549236972, cache_obj->added_lc()=false, cache_obj->get_object_id()=120, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.361327] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.361559] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.361589] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=27] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.361605] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, 
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741361604, replica_locations:[]}) [2024-09-13 13:02:21.361719] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=45000, remain_us=899827, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.362527] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.363943] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.368615] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.368862] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.368892] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=29] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.368907] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741368906, replica_locations:[]}) [2024-09-13 13:02:21.368944] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=35] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.368970] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.368977] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.368998] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.369048] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549246165, cache_obj->added_lc()=false, cache_obj->get_object_id()=123, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.370320] INFO pktc_sk_new (pktc_sk_factory.h:78) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO sk_new: s=0x2b07b0bf6048 [2024-09-13 13:02:21.370368] INFO pktc_sk_new (pktc_sk_factory.h:78) 
[19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=23] PNIO sk_new: s=0x2b07d9204048 [2024-09-13 13:02:21.370374] INFO pktc_do_connect (pktc_post.h:19) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=13] PNIO sk_new: sk=0x2b07b0bf6048, fd=132 [2024-09-13 13:02:21.370386] INFO ussl_loop_add_clientfd (ussl-loop.c:262) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] write client fd succ, fd:132, gid:0x100000001, need_send_negotiation:1 [2024-09-13 13:02:21.370394] INFO eloop_regist (eloop.c:47) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO sock regist: 0x2b07b0bf6048 fd=132 [2024-09-13 13:02:21.370400] INFO pktc_sk_check_connect (pktc_sk_factory.h:17) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO sock not ready: 0x2b07b0bf6048, fd=132 [2024-09-13 13:02:21.370401] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] sock regist: 0x2b07b3e21a50 fd=132 [2024-09-13 13:02:21.370471] INFO pktc_do_connect (pktc_post.h:19) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=49] PNIO sk_new: sk=0x2b07d9204048, fd=133 [2024-09-13 13:02:21.370489] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=7] [ussl] sock regist: 0x2b07b3e21b30 fd=133 [2024-09-13 13:02:21.370499] INFO ussl_loop_add_clientfd (ussl-loop.c:262) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=13] [ussl] write client fd succ, fd:133, gid:0x100000000, need_send_negotiation:1 [2024-09-13 13:02:21.370507] INFO eloop_regist (eloop.c:47) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=8] PNIO sock regist: 0x2b07d9204048 fd=133 [2024-09-13 13:02:21.370521] INFO pktc_sk_check_connect (pktc_sk_factory.h:17) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=11] PNIO sock not ready: 0x2b07d9204048, fd=133 [2024-09-13 13:02:21.370572] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.370747] INFO handle_client_writable_event (handle-event.c:125) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] client send negotiation message succ, fd:132, addr:"172.16.51.35:55184", auth_method:NONE, gid:0x100000001 [2024-09-13 13:02:21.370763] INFO epoll_unregist_and_give_back (handle-event.c:63) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] give back fd to origin epoll succ, client_fd:132, client_epfd:72, event:0x8000000d, client_addr:"172.16.51.35:55184", need_close:0 [2024-09-13 13:02:21.370782] INFO handle_client_writable_event (handle-event.c:125) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] client send negotiation message succ, fd:133, addr:"172.16.51.35:38152", auth_method:NONE, gid:0x100000000 [2024-09-13 13:02:21.370788] INFO epoll_unregist_and_give_back (handle-event.c:63) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] [ussl] give back fd to origin epoll succ, client_fd:133, client_epfd:65, event:0x8000000d, client_addr:"172.16.51.35:38152", need_close:0 [2024-09-13 13:02:21.370787] INFO pktc_sk_check_connect (pktc_sk_factory.h:25) [19931][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO sock connect OK: 0x2b07b0bf6048 fd:132:local:"172.16.51.37:2882":remote:"172.16.51.37:2882" [2024-09-13 13:02:21.370815] INFO pktc_sk_check_connect (pktc_sk_factory.h:25) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=8] PNIO sock connect OK: 0x2b07d9204048 fd:133:local:"172.16.51.36:2882":remote:"172.16.51.36:2882" [2024-09-13 13:02:21.371398] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.371467] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=66] server_list 
is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.371489] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741371488, replica_locations:[]}) [2024-09-13 13:02:21.371550] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1862171, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.381553] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.382977] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.386777] INFO pktc_sk_new (pktc_sk_factory.h:78) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] PNIO sk_new: s=0x2b07b0bf6a98 [2024-09-13 13:02:21.386850] INFO pktc_do_connect (pktc_post.h:19) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=19] PNIO sk_new: sk=0x2b07b0bf6a98, fd=134 [2024-09-13 13:02:21.386863] INFO ussl_loop_add_clientfd (ussl-loop.c:262) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=9] [ussl] write client fd succ, fd:134, gid:0x100000002, need_send_negotiation:1 [2024-09-13 13:02:21.386895] INFO eloop_regist (eloop.c:47) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=31] PNIO sock regist: 0x2b07b0bf6a98 fd=134 [2024-09-13 13:02:21.386902] 
INFO pktc_sk_check_connect (pktc_sk_factory.h:17) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO sock not ready: 0x2b07b0bf6a98, fd=134 [2024-09-13 13:02:21.386894] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] sock regist: 0x2b07b3e20740 fd=135 [2024-09-13 13:02:21.386910] INFO ussl_on_accept (ussl_listenfd.c:39) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=12] [ussl] accept new connection, fd:135, src_addr:172.16.51.35:50110 [2024-09-13 13:02:21.386928] INFO ussl_eloop_regist (ussl_eloop.c:41) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=6] [ussl] sock regist: 0x2b07b3e21a50 fd=134 [2024-09-13 13:02:21.386949] INFO handle_client_writable_event (handle-event.c:125) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=3] [ussl] client send negotiation message succ, fd:134, addr:"172.16.51.35:50110", auth_method:NONE, gid:0x100000002 [2024-09-13 13:02:21.386961] INFO epoll_unregist_and_give_back (handle-event.c:63) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] [ussl] give back fd to origin epoll succ, client_fd:134, client_epfd:79, event:0x8000000d, client_addr:"172.16.51.35:50110", need_close:0 [2024-09-13 13:02:21.386972] INFO acceptfd_handle_first_readable_event (handle-event.c:411) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=5] [ussl] auth mothod is NONE, the fd will be dispatched, fd:135, src_addr:172.16.51.35:50110 [2024-09-13 13:02:21.386977] INFO dispatch_accept_fd_to_certain_group (group.c:696) [19929][ussl_loop][T0][Y0-0000000000000000-0-0] [lt=4] PNIO dispatch fd to certain group, fd:135, gid:0x100000002 [2024-09-13 13:02:21.386995] INFO pkts_sk_init (pkts_sk_factory.h:23) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=4] PNIO set pkts_sk_t sock_id s=0x2b07b0bf74e8, s->id=65532 [2024-09-13 13:02:21.387002] INFO pkts_sk_new (pkts_sk_factory.h:51) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO sk_new: s=0x2b07b0bf74e8 [2024-09-13 13:02:21.387011] INFO 
eloop_regist (eloop.c:47) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=3] PNIO sock regist: 0x2b07b0bf74e8 fd=135 [2024-09-13 13:02:21.387022] INFO on_accept (listenfd.c:39) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=7] PNIO accept new connection, ns=0x2b07b0bf74e8, fd=fd:135:local:"172.16.51.35:50110":remote:"172.16.51.35:50110" [2024-09-13 13:02:21.387032] INFO pktc_sk_check_connect (pktc_sk_factory.h:25) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=6] PNIO sock connect OK: 0x2b07b0bf6a98 fd:134:local:"172.16.51.35:2882":remote:"172.16.51.35:2882" [2024-09-13 13:02:21.387123] WDIAG listenfd_handle_event (listenfd.c:71) [19932][pnio1][T0][Y0-0000000000000000-0-0] [lt=6][errcode=0] PNIO do_accept failed, err=11, errno=11, fd=-1 [2024-09-13 13:02:21.387153] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.387244] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.387267] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.387285] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741387284, replica_locations:[]}) [2024-09-13 13:02:21.387302] 
INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.387328] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.387338] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.387361] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.387417] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549264534, cache_obj->added_lc()=false, cache_obj->get_object_id()=125, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.388855] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.389145] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.389163] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.389176] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741389174, replica_locations:[]}) [2024-09-13 13:02:21.389227] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1844493, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.401593] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.402979] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.405431] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.405766] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.405789] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.405803] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741405802, replica_locations:[]}) [2024-09-13 13:02:21.405819] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.405850] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.405859] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.405896] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.405941] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6549283059, cache_obj->added_lc()=false, cache_obj->get_object_id()=126, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.406908] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.407275] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4018] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:21.407298] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.407309] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.407373] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.407430] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741407429, replica_locations:[]}) 
[2024-09-13 13:02:21.407462] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=30] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.407484] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.407492] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.407513] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.407566] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549284678, cache_obj->added_lc()=false, cache_obj->get_object_id()=124, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.407568] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.407580] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.407590] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741407589, replica_locations:[]}) [2024-09-13 13:02:21.407641] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1826080, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.408493] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.408873] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.408901] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.408912] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741408912, replica_locations:[]}) [2024-09-13 13:02:21.408967] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=46000, remain_us=852579, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.418978] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=28][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4018, dropped:24, tid:19944}, {errcode:-4721, dropped:1601, tid:19944}]) [2024-09-13 13:02:21.419448] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690058-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.422594] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.424076] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.424849] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=45][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.425131] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 
13:02:21.425157] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.425176] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.425190] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.425204] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741425203, replica_locations:[]}) [2024-09-13 13:02:21.425219] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.425239] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:17, local_retry_times:17, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, 
client_ret:-4721}, need_retry=true) [2024-09-13 13:02:21.425257] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.425266] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.425280] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:21.425291] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:21.425301] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:21.425318] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:21.425338] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.425395] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549302512, cache_obj->added_lc()=false, 
cache_obj->get_object_id()=127, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.426424] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.426461] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=36][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:21.426567] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.426920] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.426937] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.426943] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.426950] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.426962] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741426961, replica_locations:[]}) [2024-09-13 13:02:21.426997] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=33][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.427008] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:21.427017] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.427033] WDIAG [SQL.DAS] block_renew_tablet_location 
(ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:21.427044] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:21.427055] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:21.427073] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:21.427083] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:21.427091] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:21.427097] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:21.427110] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:21.427114] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:21.427124] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:21.427133] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:21.427140] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:21.427144] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:21.427151] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:21.427155] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:21.427159] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:21.427169] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:21.427178] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:21.427187] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:21.427199] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:21.427207] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:21.427215] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=18, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:21.427233] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] will sleep(sleep_us=18000, remain_us=1806488, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.427864] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.428635] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.429969] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.430735] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] [lt=20][errcode=0] server is initiating(server_id=0, local_seq=19, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:21.431622] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.431650] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] [lt=19][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:21.432810] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.435293] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=40] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14032832922, global_cache_size=0, tenant_max_wash_size=0, 
tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:21.435461] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.436535] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.440213] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.441278] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.444591] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.445429] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.445730] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.445754] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.445767] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.445778] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.445791] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741445790, replica_locations:[]}) [2024-09-13 13:02:21.445805] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.445845] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=34][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:18, local_retry_times:18, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:21.445863] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.445869] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.445888] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:21.445895] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:21.445898] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:21.445915] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=40][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.445916] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:21.445925] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.445965] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549323083, cache_obj->added_lc()=false, cache_obj->get_object_id()=129, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.445996] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.447147] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.447187] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=38][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:21.447313] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.447398] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.447572] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.447587] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.447592] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.447599] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.447608] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741447608, replica_locations:[]})
[2024-09-13 13:02:21.447621] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:21.447629] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:21.447636] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:21.447648] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:21.447663] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:21.447668] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:21.447681] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:21.447689] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:21.447695] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:21.447703] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:21.447706] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:21.447710] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:21.447717] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:21.447724] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:21.447729] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:21.447742] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:21.447746] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:21.447751] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:21.447756] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:21.447767] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:21.447776] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:21.447782] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:21.447786] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:21.447792] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:21.447797] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=19, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:21.447816] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] will sleep(sleep_us=19000, remain_us=1785905, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.449931] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A66-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741449529)
[2024-09-13 13:02:21.449963] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A66-0-0] [lt=29][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203741449529}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:21.450027] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:21.450051] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741450021)
[2024-09-13 13:02:21.450061] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203741349952, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:21.450080] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:21.450088] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:21.450093] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741450069)
[2024-09-13 13:02:21.453830] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.455001] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.455139] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.455456] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.455472] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.455478] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.455487] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.455496] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741455496, replica_locations:[]})
[2024-09-13 13:02:21.455511] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.455528] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:46, local_retry_times:46, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:21.455543] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.455549] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.455557] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:21.455562] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:21.455566] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:21.455588] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:21.455598] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.455639] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549332756, cache_obj->added_lc()=false, cache_obj->get_object_id()=128, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.456502] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:21.456531] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=28][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:21.456614] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.457020] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.457037] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.457042] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.457050] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.457061] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741457060, replica_locations:[]})
[2024-09-13 13:02:21.457074] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:21.457083] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:21.457092] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:21.457103] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:21.457108] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:21.457113] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:21.457126] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:21.457136] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:21.457141] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:21.457149] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:21.457154] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:21.457159] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:21.457168] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:21.457177] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:21.457181] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:21.457188] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:21.457192] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:21.457199] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:21.457204] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:21.457214] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:21.457223] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:21.457231] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:21.457236] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:21.457241] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:21.457246] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=47, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:21.457259] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] will sleep(sleep_us=47000, remain_us=804286, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:21.461568] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.462708] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.465931] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=14] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5)
[2024-09-13 13:02:21.467115] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.467423] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.467470] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=46][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.467479] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.467493] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.467510] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741467510, replica_locations:[]})
[2024-09-13 13:02:21.467529] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.467552] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:19, local_retry_times:19, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:21.467572] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.467584] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.467586] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=42][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.467598] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:21.467608] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:21.467630] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:21.467650] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:21.467664] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.467711] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549344829, cache_obj->added_lc()=false, cache_obj->get_object_id()=130, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.468854] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:21.468889] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=34][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:21.468994] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.469013] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.469171] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.469186] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.469198] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.469211] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.469238] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741469237, replica_locations:[]})
[2024-09-13 13:02:21.469254] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:21.469264] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:21.469273] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:21.469285] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:21.469293] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:21.469300] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:21.469314] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:21.469322] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:21.469330] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:21.469344] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:21.469351] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:21.469355] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:21.469363] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:21.469372] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:21.469377] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:21.469384] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:21.469387] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:21.469394] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:21.469400] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:21.469414] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:21.469423] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:21.469452] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:21.469459] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:21.469464] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:21.469471] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=20, 
local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:21.469484] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] will sleep(sleep_us=20000, remain_us=1764236, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.470207] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.471248] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.472594] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=20][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:21.479759] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.480750] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.489723] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.490000] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.490022] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.490028] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.490037] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.490054] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741490053, replica_locations:[]}) [2024-09-13 13:02:21.490076] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.490093] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:20, local_retry_times:20, 
err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:21.490111] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.490120] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.490129] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:21.490136] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:21.490139] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:21.490228] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=59][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:21.490224] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.490245] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 
13:02:21.490322] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549367438, cache_obj->added_lc()=false, cache_obj->get_object_id()=132, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.491184] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.491374] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.491410] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=34][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:21.491533] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.491557] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.491709] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.491725] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.491731] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.491741] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.491751] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741491750, replica_locations:[]}) [2024-09-13 13:02:21.491763] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.491770] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:21.491777] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.491792] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:21.491797] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:21.491805] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:21.491819] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:21.491827] WDIAG [SQL.OPT] calculate_phy_table_location_info 
(ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:21.491832] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:21.491839] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:21.491844] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:21.491848] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:21.491855] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:21.491864] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:21.491901] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=36][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 
13:02:21.491906] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:21.491909] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:21.491917] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:21.491924] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:21.491936] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:21.491944] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:21.491952] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:21.491958] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:21.491967] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) 
[2024-09-13 13:02:21.491971] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=21, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:21.491990] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] will sleep(sleep_us=21000, remain_us=1741730, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.493073] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.501697] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.502982] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.504493] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.504944] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.504963] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.504969] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.504978] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.504991] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741504990, replica_locations:[]}) [2024-09-13 13:02:21.505005] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.505025] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:47, local_retry_times:47, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:21.505040] WDIAG [SQL] 
do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.505049] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.505060] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:21.505067] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:21.505071] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:21.505087] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:21.505097] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.505140] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549382257, cache_obj->added_lc()=false, cache_obj->get_object_id()=131, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 
0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.506086] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.506113] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:21.506215] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.506621] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.506637] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.506642] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.506649] 
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.506658] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741506658, replica_locations:[]}) [2024-09-13 13:02:21.506671] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.506681] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:21.506689] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.506701] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] failed to get 
location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:21.506707] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:21.506715] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:21.506728] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:21.506739] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:21.506744] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:21.506752] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:21.506761] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:21.506765] WDIAG [SQL.JO] 
generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:21.506771] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:21.506778] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:21.506785] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:21.506789] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:21.506796] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:21.506801] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:21.506808] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:21.506819] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:21.506828] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:21.506837] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:21.506842] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:21.506849] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:21.506854] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=48, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:21.506871] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] will sleep(sleep_us=48000, remain_us=754674, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.513220] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.513507] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.513528] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.513534] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.513545] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.513560] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741513559, replica_locations:[]}) [2024-09-13 13:02:21.513574] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", 
tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.513593] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:21, local_retry_times:21, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:21.513609] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.513618] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.513630] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:21.513650] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:21.513653] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:21.513668] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:21.513679] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.513723] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549390840, cache_obj->added_lc()=false, cache_obj->get_object_id()=133, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.514505] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.514755] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.514782] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=26][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:21.514917] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.515089] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.515102] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.515108] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.515130] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.515142] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741515141, replica_locations:[]}) [2024-09-13 13:02:21.515155] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:21.515234] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1718487, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.515843] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.516593] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.517967] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.528338] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.529531] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.537499] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.537809] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.537854] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=43][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:21.537863] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.537897] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=31] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.537919] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741537918, replica_locations:[]}) [2024-09-13 13:02:21.537936] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.537966] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.537975] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.538001] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan 
cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.538056] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549415170, cache_obj->added_lc()=false, cache_obj->get_object_id()=135, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.539263] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.539522] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.539540] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.539546] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.539554] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.539566] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741539566, replica_locations:[]}) [2024-09-13 13:02:21.539617] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1694103, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.542516] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.542958] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.543906] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.544367] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.550007] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A67-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, 
total_part_count=0, generate_timestamp=1726203741549618) [2024-09-13 13:02:21.550039] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A67-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203741549618}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:21.550085] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:21.550108] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741550078) [2024-09-13 13:02:21.550120] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] 
tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203741450067, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:21.550144] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.550153] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.550159] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741550131) [2024-09-13 13:02:21.555077] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.555512] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.555532] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.555539] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.555551] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.555566] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741555565, replica_locations:[]}) [2024-09-13 13:02:21.555582] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.555641] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.555657] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.555696] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.555748] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549432864, cache_obj->added_lc()=false, cache_obj->get_object_id()=134, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.556902] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.557272] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.557291] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.557297] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.557307] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.557320] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741557319, replica_locations:[]}) [2024-09-13 13:02:21.557374] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=704171, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.558833] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.560021] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.562822] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.563126] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.563149] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.563177] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=27] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.563186] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.563203] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741563201, replica_locations:[]}) [2024-09-13 13:02:21.563218] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.563247] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.563259] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.563286] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.563334] WDIAG 
[SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549440451, cache_obj->added_lc()=false, cache_obj->get_object_id()=136, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.564184] WDIAG [SERVER] deliver_rpc_request (ob_srv_deliver.cpp:602) [19930][pnio1][T0][YB42AC103326-00062119EC0A117F-0-0] [lt=8][errcode=-5150] can't deliver request(req={packet:{hdr_:{checksum_:4051113742, pcode_:1316, hlen_:184, priority_:5, flags_:6151, tenant_id_:1001, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:1097921, timestamp:1726203741563791, dst_cluster_id:-1, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035158, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203741549827}, chid_:0, clen_:306, assemble:false, msg_count:0, payload:0}, type:0, group:0, sql_req_level:0, connection_phase:0, recv_timestamp_:1726203741564180, enqueue_timestamp_:0, request_arrival_time_:1726203741564180, trace_id_:Y0-0000000000000000-0-0}, ret=-5150) [2024-09-13 13:02:21.564263] WDIAG [SERVER] deliver (ob_srv_deliver.cpp:766) [19930][pnio1][T0][YB42AC103326-00062119EC0A117F-0-0] [lt=66][errcode=-5150] deliver rpc request fail(&req=0x2b07d9404098, ret=-5150) [2024-09-13 13:02:21.564595] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.564884] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.564902] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.564908] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.564916] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.564928] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741564927, replica_locations:[]}) [2024-09-13 13:02:21.564976] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=24000, remain_us=1668745, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.569482] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] 
[lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.571126] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.575485] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.576665] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.589326] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.589634] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.589664] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.589679] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.589695] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.589717] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741589715, replica_locations:[]}) [2024-09-13 13:02:21.589740] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.589810] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.589822] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.589861] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.589918] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549467035, cache_obj->added_lc()=false, cache_obj->get_object_id()=138, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 
0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.591048] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.591301] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.591318] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.591327] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.591337] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.591348] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741591347, replica_locations:[]}) [2024-09-13 13:02:21.591412] INFO [SERVER] 
sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] will sleep(sleep_us=25000, remain_us=1642309, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.593168] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.594560] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.597746] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.599280] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.606592] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.607078] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.607096] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.607102] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.607114] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.607129] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741607128, replica_locations:[]}) [2024-09-13 13:02:21.607145] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.607168] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.607177] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.607201] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 
13:02:21.607248] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549484365, cache_obj->added_lc()=false, cache_obj->get_object_id()=137, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.608296] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.608676] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.608700] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.608710] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.608725] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.608740] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741608739, replica_locations:[]}) [2024-09-13 13:02:21.608810] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=50000, remain_us=652735, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.612033] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.613354] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.616465] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=32] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, 
tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 
9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:21.616614] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.616912] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.616950] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=36][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.616966] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.616990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.617013] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741617012, replica_locations:[]}) [2024-09-13 13:02:21.617045] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=29] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.617077] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.617092] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.617136] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.617197] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549494312, cache_obj->added_lc()=false, cache_obj->get_object_id()=139, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.618402] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.618652] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.618674] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.618685] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.618696] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.618716] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741618715, replica_locations:[]}) [2024-09-13 13:02:21.618778] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1614942, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.626858] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.628342] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.632096] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.633380] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.635659] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14032832922, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:21.645112] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.645660] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.645684] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.645691] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.645700] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.645719] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741645718, replica_locations:[]}) [2024-09-13 13:02:21.645736] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.645764] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.645773] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.645828] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.645890] WDIAG 
[SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549522994, cache_obj->added_lc()=false, cache_obj->get_object_id()=141, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.647638] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.648135] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.648160] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.648173] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.648187] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.648206] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741648204, replica_locations:[]}) [2024-09-13 13:02:21.648283] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] will sleep(sleep_us=27000, remain_us=1585438, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.650152] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:21.650181] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=27][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741650143) [2024-09-13 13:02:21.650180] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A68-0-0] [lt=46][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741649688) [2024-09-13 
13:02:21.650194] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203741550129, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:21.650203] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A68-0-0] [lt=22][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203741649688}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:21.650224] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.650231] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.650241] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741650205) [2024-09-13 13:02:21.650258] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.650265] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:21.650274] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741650254) [2024-09-13 13:02:21.652993] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.654485] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.656977] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.658417] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.659019] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=42][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.659519] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.659545] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.659559] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.659575] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.659594] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741659593, replica_locations:[]}) [2024-09-13 13:02:21.659616] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, 
ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.659647] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.659660] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.659724] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.659786] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549536898, cache_obj->added_lc()=false, cache_obj->get_object_id()=140, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.661481] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.662005] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.662028] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.662038] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.662054] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.662071] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741662071, replica_locations:[]}) [2024-09-13 13:02:21.662143] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=599403, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.662508] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF1-0-0] [lt=13][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:21.662532] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF1-0-0] 
[lt=23][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=996082) [2024-09-13 13:02:21.662557] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF1-0-0] [lt=24][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1) [2024-09-13 13:02:21.662569] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:1126) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF1-0-0] [lt=10][errcode=-4012] base before process failed(ret=-4012) [2024-09-13 13:02:21.662578] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED978DF1-0-0] [lt=8][errcode=-4012] before process fail(ret=-4012) [2024-09-13 13:02:21.666026] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=21] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4) [2024-09-13 13:02:21.670414] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF2-0-0] [lt=23][errcode=0] server is initiating(server_id=0, local_seq=20, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:21.671299] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF2-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:21.675127] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=98][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.675481] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] 
[lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.675738] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.675754] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.675761] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.675769] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.675781] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741675780, replica_locations:[]}) [2024-09-13 13:02:21.675796] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, 
tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.675820] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.675829] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.675864] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.675922] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549553040, cache_obj->added_lc()=false, cache_obj->get_object_id()=142, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.676598] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.676884] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=44][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.677959] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) 
[2024-09-13 13:02:21.677989] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.678000] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.678013] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.678028] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741678027, replica_locations:[]}) [2024-09-13 13:02:21.678108] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1555613, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.688717] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.695041] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] 
[lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.698262] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.700241] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.706202] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=21][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:21.706344] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.706646] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.706678] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.706685] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.706697] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.706714] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741706713, replica_locations:[]}) [2024-09-13 13:02:21.706729] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.706752] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.706758] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.706780] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.706827] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549583944, cache_obj->added_lc()=false, cache_obj->get_object_id()=144, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 
0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.708086] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.708506] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.708532] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.708539] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.708546] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.708557] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741708556, 
replica_locations:[]}) [2024-09-13 13:02:21.708607] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1525114, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.713309] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.713660] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.713873] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=212][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.713913] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=40] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.713924] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.713939] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, 
renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741713938, replica_locations:[]}) [2024-09-13 13:02:21.713959] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.713984] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.713993] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.714013] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.714056] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549591173, cache_obj->added_lc()=false, cache_obj->get_object_id()=143, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.714994] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:21.715319] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.715337] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.715344] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.715351] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.715360] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741715359, replica_locations:[]}) [2024-09-13 13:02:21.715408] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=52000, remain_us=546137, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.722751] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.724820] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.725566] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.725662] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:21.725700] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=19] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=12478464) [2024-09-13 13:02:21.727399] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.737806] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.738218] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.738237] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.738244] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.738254] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.738269] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741738268, replica_locations:[]}) [2024-09-13 13:02:21.738292] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.738333] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.738347] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) 
[2024-09-13 13:02:21.738375] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.738420] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549615538, cache_obj->added_lc()=false, cache_obj->get_object_id()=145, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.739713] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.740003] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.740022] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.740028] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.740036] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.740048] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741740047, replica_locations:[]})
[2024-09-13 13:02:21.740111] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1493610, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.748370] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.749724] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.750202] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A69-0-0] [lt=9][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741749757)
[2024-09-13 13:02:21.750231] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A69-0-0] [lt=21][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203741749757}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:21.750250] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:21.750266] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741750244)
[2024-09-13 13:02:21.750277] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203741650202, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:21.750302] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:21.750313] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:21.750318] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741750289)
[2024-09-13 13:02:21.758927] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.760490] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.767604] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.768060] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.768078] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.768085] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.768094] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.768110] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741768109, replica_locations:[]})
[2024-09-13 13:02:21.768125] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.768148] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.768157] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.768179] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.768235] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549645344, cache_obj->added_lc()=false, cache_obj->get_object_id()=146, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.769240] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.769600] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.769617] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.769623] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.769630] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.769639] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741769638, replica_locations:[]})
[2024-09-13 13:02:21.769689] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=53000, remain_us=491857, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:21.770284] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.770517] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.770535] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.770544] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.770554] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.770567] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741770567, replica_locations:[]})
[2024-09-13 13:02:21.770588] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.770608] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.770616] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.770633] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.770744] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549647861, cache_obj->added_lc()=false, cache_obj->get_object_id()=147, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.771677] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.771941] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.771959] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.771967] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.771977] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.771989] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741771989, replica_locations:[]})
[2024-09-13 13:02:21.772070] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=31000, remain_us=1461651, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.774243] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.775576] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.793168] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.794775] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.801114] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.802491] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.803272] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.803555] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.803576] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.803584] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.803595] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.803612] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741803611, replica_locations:[]})
[2024-09-13 13:02:21.803628] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.803653] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.803662] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.803690] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.803736] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549680853, cache_obj->added_lc()=false, cache_obj->get_object_id()=149, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.804946] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.805181] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.805200] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.805206] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.805214] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.805224] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741805223, replica_locations:[]})
[2024-09-13 13:02:21.805278] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1428442, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.822870] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.823288] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.823307] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.823313] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.823321] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.823335] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741823334, replica_locations:[]})
[2024-09-13 13:02:21.823350] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.823372] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.823381] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.823412] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.823481] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549700597, cache_obj->added_lc()=false, cache_obj->get_object_id()=148, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.824457] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.824776] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.824798] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.824804] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.824811] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.824821] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741824821, replica_locations:[]})
[2024-09-13 13:02:21.824886] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=54000, remain_us=436660, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:21.828288] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.828515] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:21.828557] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]})
[2024-09-13 13:02:21.828990] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.829709] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.830269] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.836003] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14032832922, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:21.837495] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.837757] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.837777] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.837784] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.837806] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.837817] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741837816, replica_locations:[]})
[2024-09-13 13:02:21.837831] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.837852] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.837861] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.837893] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.837936] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549715053, cache_obj->added_lc()=false, cache_obj->get_object_id()=150, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.838970] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.839253] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.839272] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.839278] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.839297] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.839310] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741839309, replica_locations:[]})
[2024-09-13 13:02:21.839354] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] will sleep(sleep_us=33000, remain_us=1394366, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.841780] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=21] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=5754, clean_start_pos=251658, clean_num=125829)
[2024-09-13 13:02:21.850265] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6A-0-0] [lt=30][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741849838)
[2024-09-13 13:02:21.850298] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6A-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203741849838}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:21.850316] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:21.850334] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741850309)
[2024-09-13 13:02:21.850342] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203741750287, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:21.850363] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:21.850369] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:21.850373] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741850350)
[2024-09-13 13:02:21.854425] INFO [STORAGE.TRANS] handle_request
(ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3A-0-0] [lt=18] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:21.854454] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3A-0-0] [lt=28][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203741854054], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:21.854845] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC9-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.855535] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DC9-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:21.857737] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.859189] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.864300] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.865796] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.866122] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3) [2024-09-13 
13:02:21.872630] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.872897] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.872993] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.873022] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.873032] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.873103] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=68] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.873170] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741873169, 
replica_locations:[]}) [2024-09-13 13:02:21.873195] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.873201] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=21] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:21.873266] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.873278] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.873329] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.873409] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549750527, cache_obj->added_lc()=false, cache_obj->get_object_id()=152, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.874249] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=0/0, request 
done=0/0, request doing=0/0) [2024-09-13 13:02:21.874670] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.875087] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.875113] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.875119] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.875135] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.875195] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=53] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741875194, replica_locations:[]}) [2024-09-13 13:02:21.875281] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] will sleep(sleep_us=34000, remain_us=1358440, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.879090] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.879484] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.879508] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.879518] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.879530] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.879540] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203741879539, replica_locations:[]}) [2024-09-13 13:02:21.879554] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.879577] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.879583] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.879606] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.879648] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549756765, cache_obj->added_lc()=false, cache_obj->get_object_id()=151, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.880657] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.880963] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.880980] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.880987] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.880993] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.881001] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741881001, replica_locations:[]}) [2024-09-13 13:02:21.881051] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=55000, remain_us=380494, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:21.887723] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=23][errcode=-4719] get ls 
handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.889281] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.901339] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.902744] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.909518] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.909813] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.909838] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.909846] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.909854] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.909867] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741909866, replica_locations:[]}) [2024-09-13 13:02:21.909894] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=24] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.909989] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.910027] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=36][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.910048] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.910155] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=35][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549787270, cache_obj->added_lc()=false, cache_obj->get_object_id()=153, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 
0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.911361] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.911596] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.911618] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.911624] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.911632] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.911641] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741911641, replica_locations:[]}) [2024-09-13 13:02:21.911775] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=35000, remain_us=1321945, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:21.918866] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.920386] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.936234] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.936612] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.936634] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.936644] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.936655] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.936673] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741936672, replica_locations:[]}) [2024-09-13 13:02:21.936692] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:21.936720] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:21.936729] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:21.936767] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:21.936822] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549813936, cache_obj->added_lc()=false, cache_obj->get_object_id()=154, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 
0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:21.937871] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:21.938165] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.938184] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:21.938194] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:21.938209] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:21.938222] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741938220, replica_locations:[]}) [2024-09-13 13:02:21.938283] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=56000, remain_us=323263, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:21.939211] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.940754] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.946580] INFO [SERVER.OMT] recv_group_request (ob_tenant.cpp:1382) [19931][pnio1][T0][YB42AC103326-00062119D94365E3-0-0] [lt=8] create group successfully(id=1, group_id=19, group=0x2b07d6804030)
[2024-09-13 13:02:21.946898] INFO [SHARE] get_next_sess_id (ob_active_session_guard.cpp:336) [20326][][T0][Y0-0000000000000000-0-0] [lt=0] succ to generate background session id(sessid=1833951035392)
[2024-09-13 13:02:21.947018] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.947071] INFO register_pm (ob_page_manager.cpp:40) [20326][][T0][Y0-0000000000000000-0-0] [lt=29] register pm finish(ret=0, &pm=0x2b07d9656340, pm.get_tid()=20326, tenant_id=500)
[2024-09-13 13:02:21.947106] INFO [SHARE] pre_run (ob_tenant_base.cpp:314) [20326][][T1][Y0-0000000000000000-0-0] [lt=21] tenant thread pre_run(MTL_ID()=1, ret=0, thread_count_=192)
[2024-09-13 13:02:21.947106] INFO [SERVER.OMT] recv_group_request (ob_tenant.cpp:1413) [19931][pnio1][T0][YB42AC103326-00062119D94365E3-0-0] [lt=23] worker thread created(id()=1, group->group_id_=19)
[2024-09-13 13:02:21.947118] INFO [RPC.OBRPC] th_init (ob_rpc_translator.cpp:33) [20326][][T1][Y0-0000000000000000-0-0] [lt=10] Init thread local success
[2024-09-13 13:02:21.947133] INFO unregister_pm (ob_page_manager.cpp:50) [20326][][T1][Y0-0000000000000000-0-0] [lt=13] unregister pm finish(&pm=0x2b07d9656340, pm.get_tid()=20326)
[2024-09-13 13:02:21.947148] INFO register_pm (ob_page_manager.cpp:40) [20326][][T1][Y0-0000000000000000-0-0] [lt=12] register pm finish(ret=0, &pm=0x2b07d9656340, pm.get_tid()=20326, tenant_id=1)
[2024-09-13 13:02:21.947245] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=4][errcode=0] server is initiating(server_id=0, local_seq=21, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:21.947421] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.947453] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.947460] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.947470] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.947482] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741947481, replica_locations:[]})
[2024-09-13 13:02:21.947496] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.947516] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.947525] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.947545] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.947586] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549824703, cache_obj->added_lc()=false, cache_obj->get_object_id()=155, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.948178] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=13][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:21.948720] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.949032] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.949060] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.949067] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.949074] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.949086] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741949086, replica_locations:[]})
[2024-09-13 13:02:21.949134] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1284587, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.950260] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6B-0-0] [lt=33][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203741949910)
[2024-09-13 13:02:21.950284] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6B-0-0] [lt=18][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203741949910}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:21.950310] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:21.950318] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:21.950323] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203741950300)
[2024-09-13 13:02:21.950921] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.952384] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.978404] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.979938] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.983940] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.985309] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.985386] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.985620] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.985652] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.985658] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.985667] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.985684] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741985683, replica_locations:[]})
[2024-09-13 13:02:21.985713] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=26] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.985735] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.985744] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.985763] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.985812] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549862929, cache_obj->added_lc()=false, cache_obj->get_object_id()=157, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.987101] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.987494] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.987521] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.987527] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.987535] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.987548] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741987547, replica_locations:[]})
[2024-09-13 13:02:21.987599] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=37000, remain_us=1246122, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:21.994470] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.994817] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.994835] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.994841] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.994849] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.994858] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741994857, replica_locations:[]})
[2024-09-13 13:02:21.994872] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:21.994898] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:21.994908] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:21.994925] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:21.994965] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549872082, cache_obj->added_lc()=false, cache_obj->get_object_id()=156, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:21.995747] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:21.996038] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.996056] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:21.996063] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:21.996070] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:21.996077] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203741996077, replica_locations:[]})
[2024-09-13 13:02:21.996116] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=265430, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:22.018007] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.018511] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.019544] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.020363] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.024789] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.025182] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.025251] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=67][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.025299] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=47] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.025335] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=33] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.025374] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=29] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742025373, replica_locations:[]})
[2024-09-13 13:02:22.025444] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=56] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.025493] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.025523] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=29][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.025575] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.025642] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=29][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549902760, cache_obj->added_lc()=false, cache_obj->get_object_id()=158, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.026893] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.027281] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.027328] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=45][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.027361] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=32] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.027393] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=30] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.027428] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742027427, replica_locations:[]})
[2024-09-13 13:02:22.027537] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=38000, remain_us=1206184, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:22.042119] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14026541466, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:22.050373] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:22.050409] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742050365)
[2024-09-13 13:02:22.050422] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203741850348, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:22.050447] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:22.050459] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:22.050466] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742050429)
[2024-09-13 13:02:22.050530] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6C-0-0] [lt=29][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742049983)
[2024-09-13 13:02:22.050572] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:22.050579] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:22.050564] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20288][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6C-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203742049983}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:22.050583] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742050568)
[2024-09-13 13:02:22.053105] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.053276] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.053554] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.053571] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.053577] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.053587] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.053598] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742053597, replica_locations:[]})
[2024-09-13 13:02:22.053612] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.053635] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.053643] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.053678] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.053723] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549930840, cache_obj->added_lc()=false, cache_obj->get_object_id()=159, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.054565] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.054794] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.055045] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.055063] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.055069] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.055076] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.055088] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742055087, replica_locations:[]})
[2024-09-13 13:02:22.055136] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=206410, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203742261545)
[2024-09-13 13:02:22.060111] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.061930] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.065758] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.065999] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=36][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.066046] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=45][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.066079] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=32] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.066112] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=31] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.066147] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28]
[LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742066146, replica_locations:[]}) [2024-09-13 13:02:22.066211] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2) [2024-09-13 13:02:22.066251] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=102] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.066297] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.066352] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=54][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.066395] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.066470] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549943587, cache_obj->added_lc()=false, cache_obj->get_object_id()=160, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 
0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.067564] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.067824] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.067887] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=43][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.067922] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=34] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.067959] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=35] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.067993] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742067993, replica_locations:[]}) [2024-09-13 13:02:22.068077] INFO [SERVER] 
sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1165644, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.072420] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1}) [2024-09-13 13:02:22.089128] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.090888] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20288][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=22][errcode=0] server is initiating(server_id=0, local_seq=22, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:22.090980] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.091746] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=19][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 
13:02:22.092904] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=26] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:22.092929] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=5] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:22.093891] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=13] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:22.094361] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=5] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:22.094383] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=35] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:22.094398] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=19] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:22.094892] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=27] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:22.095045] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=19] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:22.095147] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=20] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request 
doing=0/0) [2024-09-13 13:02:22.102543] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.104493] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.107308] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.107675] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=43][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.107699] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.107706] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.107714] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.107730] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has 
changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742107729, replica_locations:[]}) [2024-09-13 13:02:22.107745] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.107775] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.107784] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.107812] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.107858] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549984976, cache_obj->added_lc()=false, cache_obj->get_object_id()=162, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.109109] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] 
[lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.109346] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.109364] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.109370] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.109378] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.109389] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742109388, replica_locations:[]}) [2024-09-13 13:02:22.109448] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=40000, remain_us=1124273, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203743233720) 
[2024-09-13 13:02:22.113365] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.113682] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.113705] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.113712] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.113719] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.113728] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742113728, replica_locations:[]}) [2024-09-13 13:02:22.113742] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] 
batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.113760] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.113769] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.113788] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.113834] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6549990950, cache_obj->added_lc()=false, cache_obj->get_object_id()=161, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.114906] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.115182] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.115202] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.115209] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.115216] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.115227] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742115226, replica_locations:[]}) [2024-09-13 13:02:22.115269] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=146276, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:22.118388] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=20] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:22.126682] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.128195] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.130703] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC77-0-0] [lt=19][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:2892016032, pcode_:1193, hlen_:184, priority_:3, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203742129506, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035178, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203742078192}, chid_:0, clen_:30, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:22.130747] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC77-0-0] [lt=43][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:22.131902] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=14] PNIO [ratelimit] time: 1726203742131901, bytes: 2649656, bw: 0.196019 MB/s, add_ts: 1007610, add_bytes: 207105 [2024-09-13 13:02:22.146110] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.148043] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.149677] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] 
[lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.149953] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.149979] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.149987] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.149995] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.150020] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742150019, replica_locations:[]}) [2024-09-13 13:02:22.150036] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, 
tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.150067] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.150076] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.150098] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.150280] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550027261, cache_obj->added_lc()=false, cache_obj->get_object_id()=163, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.150634] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:22.150656] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, 
local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742150628) [2024-09-13 13:02:22.150666] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203742050429, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:22.150688] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.150697] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.150702] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742150675) [2024-09-13 13:02:22.151476] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.151689] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.151708] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.151714] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.151768] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=52] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.151781] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742151780, replica_locations:[]}) [2024-09-13 13:02:22.151957] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=41000, remain_us=1081764, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.152087] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:565) [20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=39] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, 
large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0) [2024-09-13 13:02:22.157340] INFO [SQL.EXE] run2 (ob_maintain_dependency_info_task.cpp:227) [19986][MaintainDepInfo][T0][Y0-0000000000000000-0-0] [lt=15] [ASYNC TASK QUEUE](queue_.size()=0, sys_view_consistent_.size()=0) [2024-09-13 13:02:22.159208] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=21] PNIO [ratelimit] time: 1726203742159206, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007614, add_bytes: 0 [2024-09-13 13:02:22.164808] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.166298] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.174474] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.174924] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=11][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.174945] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.174952] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.174960] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.174975] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742174974, replica_locations:[]}) [2024-09-13 13:02:22.174989] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.175018] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.175027] WDIAG [SQL] 
do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.175062] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.175105] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550052222, cache_obj->added_lc()=false, cache_obj->get_object_id()=164, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.176126] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.176462] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.176482] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.176488] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.176496] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.176504] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742176504, replica_locations:[]}) [2024-09-13 13:02:22.176557] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=60000, remain_us=84989, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:22.189045] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782DC-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.190617] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.192286] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.195157] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.195271] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.195289] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.195295] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.195307] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.195320] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742195319, replica_locations:[]}) [2024-09-13 13:02:22.195334] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 
13:02:22.195357] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.195368] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.195394] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.195449] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550072558, cache_obj->added_lc()=false, cache_obj->get_object_id()=165, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.196582] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.196899] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.196916] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:22.196923] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.196930] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.196939] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742196939, replica_locations:[]}) [2024-09-13 13:02:22.196992] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=42000, remain_us=1036729, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.197644] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20051][sql_nio0][T0][Y0-0000000000000000-0-0] [lt=14][errcode=0] server is initiating(server_id=0, local_seq=23, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:22.197667] INFO [RPC.OBMYSQL] create_scramble_string (obsm_conn_callback.cpp:61) [20051][sql_nio0][T0][Y0-0000000000000000-0-0] [lt=22] init thread_rand succ(ret=0) [2024-09-13 13:02:22.197674] INFO [RPC.OBMYSQL] sm_conn_build_handshake (obsm_conn_callback.cpp:121) [20051][sql_nio0][T0][Y0-0000000000000000-0-0] [lt=6] new mysql sessid created(conn.sessid_=3221225495, support_ssl=false) [2024-09-13 
13:02:22.197697] INFO [RPC.OBMYSQL] init (obsm_conn_callback.cpp:141) [20051][sql_nio0][T0][Y0-0000000000000000-0-0] [lt=6] sm conn init succ(conn.sessid_=3221225495, sess.client_addr_="172.16.51.35:34402") [2024-09-13 13:02:22.197711] INFO [RPC.OBMYSQL] do_accept_one (ob_sql_nio.cpp:1089) [20051][sql_nio0][T0][Y0-0000000000000000-0-0] [lt=12] accept one succ(*s={this:0x2b07baffef30, session_id:3221225495, trace_id:Y0-0000000000000000-0-0, sql_handling_stage:-1, sql_initiative_shutdown:false, reader:{fd:136}, err:0, last_decode_time:0, pending_write_task:{buf:null, sz:0}, need_epoll_trigger_write:false, consume_size:0, pending_flag:0, may_handling_flag:true, handler_close_flag:false}) [2024-09-13 13:02:22.197811] INFO [SERVER] extract_user_tenant (obmp_connect.cpp:83) [20051][sql_nio0][T0][Y0-0000000000000000-0-0] [lt=16] username and tenantname(user_name=root, tenant_name=) [2024-09-13 13:02:22.197833] INFO [SERVER] dispatch_req (ob_srv_deliver.cpp:285) [20051][sql_nio0][T1][Y0-0000000000000000-0-0] [lt=9] succeed to dispatch to tenant mysql queue(tenant_id=1) [2024-09-13 13:02:22.197916] INFO [SERVER] verify_connection (obmp_connect.cpp:2037) [20238][T1_MysqlQueueTh][T1][Y0-000621F921C60C7D-0-0] [lt=4] server is initializing, ignore verify_ip_white_list(status=1, ret=0) [2024-09-13 13:02:22.198004] INFO load_privilege_info (obmp_connect.cpp:573) [20238][T1_MysqlQueueTh][T1][Y0-000621F921C60C7D-0-0] [lt=19] no tenant name set, use default tenant name(tenant_name=sys) [2024-09-13 13:02:22.198765] INFO alloc_array (ob_dchash.h:415) [20238][T1_MysqlQueueTh][T1][Y0-000621F921C60C7D-0-0] [lt=10] DCHash: alloc_array: N9oceanbase3sql15ObTenantUserKeyE this=0x55a386e18f00 array=0x2b07d5e32030 array_size=65536 prev_array=(nil) [2024-09-13 13:02:22.200627] INFO [SERVER] process (obmp_connect.cpp:514) [20238][T1_MysqlQueueTh][T1][Y0-000621F921C60C7D-0-0] [lt=16] MySQL LOGIN(direct_client_ip="172.16.51.35", client_ip=172.16.51.35, tenant_name=sys, tenant_id=1, user_name=root, 
host_name=%, sessid=3221225495, proxy_sessid=0, sess_create_time=0, from_proxy=false, from_java_client=false, from_oci_client=false, from_jdbc_client=false, capability=3908101, proxy_capability=0, use_ssl=false, c/s protocol="OB_MYSQL_CS_TYPE", autocommit=true, proc_ret=0, ret=0, conn->client_type_=3, conn->client_version_=0) [2024-09-13 13:02:22.203912] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.205431] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.208396] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=18] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:22.225754] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=12] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:22.225792] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=15] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=12478464) [2024-09-13 13:02:22.227149] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:346) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=8] ====== check clog disk timer task ====== [2024-09-13 13:02:22.227167] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=16] get_disk_usage(ret=0, capacity(MB):=0, 
used(MB):=0) [2024-09-13 13:02:22.227195] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=23] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false) [2024-09-13 13:02:22.228651] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=20] gc stale ls task succ [2024-09-13 13:02:22.231343] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.231781] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.232431] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.232764] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.232983] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.233059] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=18] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:22.233960] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C80-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.234236] 
WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.234260] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.234270] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.234288] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.234323] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=11][errcode=0] server is initiating(server_id=0, local_seq=24, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:22.235409] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=11] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:22.235446] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=33][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:22.235466] WDIAG [SQL.RESV] 
resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=20][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:22.235476] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:22.235486] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=7][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:22.235493] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=6][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:22.235500] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:22.235513] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=12][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:22.235520] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=8][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:22.235524] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:22.235528] WDIAG [SQL.RESV] resolve_from_clause 
(ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:22.235532] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:22.235540] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=7][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:22.235544] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:22.235563] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=12][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:22.235570] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:22.235579] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:22.235587] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=7][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:22.235592] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=5][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 
1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:22.235600] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:22.235608] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:22.235622] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=11][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:22.235637] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:22.235644] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=6][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:22.235648] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:22.235661] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) 
[2024-09-13 13:02:22.235674] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.235691] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=16][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:22.235697] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=5][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:22.235705] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:22.235711] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=6][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:22.235718] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=6][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203742235175, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:22.235728] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=10][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:22.235733] WDIAG [SHARE.PT] get_by_tenant 
(ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=4][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:22.235789] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=8][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:22.235805] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=15][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:22.235817] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=11][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:22.235825] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=7][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:22.235832] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=4][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:22.235840] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=7][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:22.235844] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C80-0-0] [lt=4][errcode=-5019] fail to check ls meta table(ret=-5019, 
ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:22.235917] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.236751] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:22.236763] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.236765] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:22.236772] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:22.236779] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:22.237004] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.237024] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.237031] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.237038] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.237049] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742237048, replica_locations:[]}) [2024-09-13 13:02:22.237064] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.237081] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.237087] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.237103] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.237144] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550114260, cache_obj->added_lc()=false, cache_obj->get_object_id()=166, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.237686] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.238084] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.238299] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.238315] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.238321] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.238327] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] 
[lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.238337] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742238336, replica_locations:[]}) [2024-09-13 13:02:22.238379] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0] will sleep(sleep_us=23166, remain_us=23166, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203742261545) [2024-09-13 13:02:22.239244] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.239346] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.239367] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.239373] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 
13:02:22.239380] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.239392] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742239391, replica_locations:[]}) [2024-09-13 13:02:22.239403] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.239420] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:22.239449] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=26][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.239455] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.239469] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.239504] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550116622, cache_obj->added_lc()=false, cache_obj->get_object_id()=167, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.240394] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.240598] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.240619] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.240626] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.240633] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.240641] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742240640, replica_locations:[]}) [2024-09-13 13:02:22.240684] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=993036, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.242453] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14024444314, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:22.243972] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.245353] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.250547] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6D-0-0] [lt=10][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742250081) [2024-09-13 13:02:22.250579] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc 
(ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6D-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203742250081}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:22.250595] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:22.250613] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:22.250633] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.250642] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate 
min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.250647] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742250623) [2024-09-13 13:02:22.261634] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203742261546, ctx_timeout_ts=1726203742261546, worker_timeout_ts=1726203742261545, default_timeout=1000000) [2024-09-13 13:02:22.261660] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=25][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:22.261667] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:22.261677] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.261688] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) 
[2024-09-13 13:02:22.261702] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.261711] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.261740] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.261785] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550138902, cache_obj->added_lc()=false, cache_obj->get_object_id()=168, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.262661] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203742261545, ctx_timeout_ts=1726203742261545, worker_timeout_ts=1726203742261545, default_timeout=1000000) [2024-09-13 13:02:22.262680] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=18][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:22.262687] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:22.262695] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:22.262708] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.262720] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=12][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:22.262748] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=1][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:22.262763] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.262768] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.262783] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=4] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:22.262796] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=0][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:22.262810] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:22.262817] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.262823] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=4] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000494) [2024-09-13 13:02:22.262831] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:22.262838] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) 
[19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:22.262846] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=8][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:22.262850] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:22.262855] WDIAG [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1) [2024-09-13 13:02:22.262864] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:22.262904] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550140023, cache_obj->added_lc()=false, cache_obj->get_object_id()=170, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 
0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.262957] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=10][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:22.262965] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:22.262970] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:22.262977] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:22.262985] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4601) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=7][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:22.262990] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=5][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1) [2024-09-13 13:02:22.262995] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=4] [REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, 
ret="OB_TIMEOUT", tenant_id=1, cost=2001452) [2024-09-13 13:02:22.263000] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=4][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1) [2024-09-13 13:02:22.263008] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=7] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2001474) [2024-09-13 13:02:22.263015] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=6][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1]) [2024-09-13 13:02:22.263020] INFO [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=4] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:22.263024] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7E-0-0] [lt=3][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:22.263030] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] fail to batch process task(ret=-4012) [2024-09-13 13:02:22.263034] WDIAG [SERVER] run1 (ob_uniq_task_queue.h:456) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1) [2024-09-13 13:02:22.263055] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=4] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1]) [2024-09-13 13:02:22.263064] 
INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=8] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1) [2024-09-13 13:02:22.264682] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.264937] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.264956] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.264962] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.264970] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.264983] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203742264982, replica_locations:[]}) [2024-09-13 13:02:22.265027] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1998046, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.265102] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.265242] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.265283] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.265295] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.265302] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.265307] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.265315] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742265314, replica_locations:[]})
[2024-09-13 13:02:22.265323] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.265352] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.265357] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.265377] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.265412] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.265423] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.265428] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.265433] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.265450] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=16][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.265459] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:22.265465] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:22.265469] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638)
[2024-09-13 13:02:22.265539] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.265540] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550142659, cache_obj->added_lc()=false, cache_obj->get_object_id()=171, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.265655] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.265667] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.265672] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.265678] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.265684] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742265684, replica_locations:[]})
[2024-09-13 13:02:22.265694] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:22.265702] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0][errcode=-4638] fail to refresh core partition(tmp_ret=-4721)
[2024-09-13 13:02:22.265871] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000)
[2024-09-13 13:02:22.265909] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=37][errcode=-4638]
[2024-09-13 13:02:22.265989] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.266201] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.266216] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.266222] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.266228] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.266236] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.266242] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:22.266251] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:22.266255] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0)
[2024-09-13 13:02:22.266286] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=9] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1)
[2024-09-13 13:02:22.266320] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.266488] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.266501] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.266506] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.266520] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.266525] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.266533] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:22.266539] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:22.266542] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1)
[2024-09-13 13:02:22.266601] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.266710] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.266751] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.266759] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.266764] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.266770] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.266774] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.266784] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:22.266789] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:22.266797] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2)
[2024-09-13 13:02:22.266802] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638)
[2024-09-13 13:02:22.266808] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:22.266814] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2)
[2024-09-13 13:02:22.266885] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.266894] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.266899] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.266904] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.266910] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=3] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742266910, replica_locations:[]})
[2024-09-13 13:02:22.266945] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] will sleep(sleep_us=1000, remain_us=1996127, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.268063] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.268252] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.268264] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.268270] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.268275] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.268282] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742268281, replica_locations:[]})
[2024-09-13 13:02:22.268294] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.268309] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.268314] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.268329] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.268352] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550145472, cache_obj->added_lc()=false, cache_obj->get_object_id()=172, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.269015] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.269205] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.269221] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.269227] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.269234] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.269241] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742269240, replica_locations:[]})
[2024-09-13 13:02:22.269275] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1993798, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.271493] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.271680] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.271694] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.271700] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.271707] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.271715] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742271714, replica_locations:[]})
[2024-09-13 13:02:22.271724] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.271738] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.271743] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.271762] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.271790] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550148910, cache_obj->added_lc()=false, cache_obj->get_object_id()=173, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.272511] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.272734] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.272748] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.272754] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.272760] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.272768] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742272767, replica_locations:[]})
[2024-09-13 13:02:22.272803] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1990269, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.275979] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.276202] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.276219] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.276226] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.276235] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.276246] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742276245, replica_locations:[]})
[2024-09-13 13:02:22.276259] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.276276] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.276284] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.276308] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.276335] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550153453, cache_obj->added_lc()=false, cache_obj->get_object_id()=174, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.277043] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.277257] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.277273] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.277279] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.277286] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.277293] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742277293, replica_locations:[]})
[2024-09-13 13:02:22.277328] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1985744, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.281508] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.281772] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.281786] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.281792] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.281801] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.281813] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742281812, replica_locations:[]})
[2024-09-13 13:02:22.281825] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.281843] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.281850] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.281865] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.281900] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550159020, cache_obj->added_lc()=false, cache_obj->get_object_id()=175, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.282448] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.282606] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:22.282835] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.282851] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.282857] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6]
leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.282866] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.282885] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742282885, replica_locations:[]}) [2024-09-13 13:02:22.282921] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1980152, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.283903] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.284155] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.284304] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.284326] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.284333] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.284340] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.284353] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742284352, replica_locations:[]}) [2024-09-13 13:02:22.284366] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.284385] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.284394] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) 
[2024-09-13 13:02:22.284413] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.284471] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550161589, cache_obj->added_lc()=false, cache_obj->get_object_id()=169, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.284820] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.285288] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.285501] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.285516] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.285528] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742285528, replica_locations:[]}) [2024-09-13 13:02:22.285566] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=44000, remain_us=948154, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.286052] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.288089] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.288365] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.288381] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.288390] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742288389, 
replica_locations:[]}) [2024-09-13 13:02:22.288403] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.288425] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.288432] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.288462] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.288489] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550165610, cache_obj->added_lc()=false, cache_obj->get_object_id()=176, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.289169] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.289352] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] 
leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.289381] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=27] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.289392] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742289392, replica_locations:[]}) [2024-09-13 13:02:22.289460] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1973612, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.290707] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=22] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:22.294043] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=19][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:22.295645] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.296358] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.296382] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.296396] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742296395, replica_locations:[]}) [2024-09-13 13:02:22.296416] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.296452] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.296463] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.296486] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.296525] WDIAG [SQL.PC] 
common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550173642, cache_obj->added_lc()=false, cache_obj->get_object_id()=178, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.297481] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.297674] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.297698] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.297710] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742297709, replica_locations:[]}) [2024-09-13 13:02:22.297762] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1965311, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.304936] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.305302] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.305319] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.305329] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742305328, replica_locations:[]}) [2024-09-13 13:02:22.305343] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.305360] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.305368] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.305401] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.305431] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550182552, cache_obj->added_lc()=false, cache_obj->get_object_id()=179, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.306149] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.306373] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.306391] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.306399] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, 
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742306399, replica_locations:[]}) [2024-09-13 13:02:22.306443] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1956630, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.314624] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.314665] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=34][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:22.314885] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.314900] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.314910] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742314909, 
replica_locations:[]}) [2024-09-13 13:02:22.314923] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.314940] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.314946] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.314960] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.314988] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550192108, cache_obj->added_lc()=false, cache_obj->get_object_id()=180, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.315700] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.315896] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] 
leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.315916] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.315924] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742315924, replica_locations:[]}) [2024-09-13 13:02:22.315960] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] will sleep(sleep_us=9000, remain_us=1947112, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.325145] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4719] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:22.325387] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.325404] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.325413] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742325413, replica_locations:[]}) [2024-09-13 13:02:22.325427] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.325454] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.325459] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.325478] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.325507] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550202628, cache_obj->added_lc()=false, cache_obj->get_object_id()=181, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:22.326460] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.326482] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.326491] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742326491, replica_locations:[]}) [2024-09-13 13:02:22.326533] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] will sleep(sleep_us=10000, remain_us=1936539, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.329072] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:22.329099] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:22.329092] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60C92-0-0] [lt=17][errcode=0] tenant schema is not ready, need 
wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203742329042}) [2024-09-13 13:02:22.329968] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.329990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.330005] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742330004, replica_locations:[]}) [2024-09-13 13:02:22.330021] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.330056] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.330067] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.330098] WDIAG [SQL] 
move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.330147] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550207264, cache_obj->added_lc()=false, cache_obj->get_object_id()=177, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.331260] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.331294] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=33] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.331308] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742331307, replica_locations:[]}) [2024-09-13 13:02:22.331356] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=45000, remain_us=902364, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.332604] INFO pn_ratelimit (group.c:643) [20054][IngressService][T0][Y0-0000000000000000-0-0] [lt=17] PNIO set ratelimit as 9223372036854775807 bytes/s, grp_id=2 [2024-09-13 13:02:22.337006] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.337025] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.337033] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742337033, replica_locations:[]}) [2024-09-13 13:02:22.337043] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.337058] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.337068] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] fail close 
main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.337082] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.337112] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550214232, cache_obj->added_lc()=false, cache_obj->get_object_id()=182, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.338151] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.338171] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.338179] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742338179, replica_locations:[]}) [2024-09-13 13:02:22.338213] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1924859, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.348262] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=23] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:22.349610] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.349629] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.349639] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742349638, replica_locations:[]}) [2024-09-13 13:02:22.349649] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.349664] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is 
null(ret=-4006) [2024-09-13 13:02:22.349670] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.349694] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.349733] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550226852, cache_obj->added_lc()=false, cache_obj->get_object_id()=184, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.350636] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:22.350654] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:22.350666] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] post cluster heartbeat rpc 
fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742350631) [2024-09-13 13:02:22.350677] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203742150675, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:22.350698] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.350707] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.350712] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742350687) [2024-09-13 13:02:22.350957] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.350972] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:22.350980] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742350979, replica_locations:[]}) [2024-09-13 13:02:22.351025] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1912048, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.354939] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3B-0-0] [lt=20] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:22.354958] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3B-0-0] [lt=18][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203742354506], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:22.355396] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCA-0-0] [lt=1][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:22.356113] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCA-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:22.363433] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, 
ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.363466] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=32] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.363476] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742363475, replica_locations:[]}) [2024-09-13 13:02:22.363493] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.363517] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.363523] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.363536] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.363566] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550240686, 
cache_obj->added_lc()=false, cache_obj->get_object_id()=185, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.364539] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.364559] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.364568] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742364567, replica_locations:[]}) [2024-09-13 13:02:22.364602] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=13000, remain_us=1898471, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.376758] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 
13:02:22.376782] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.376797] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742376796, replica_locations:[]}) [2024-09-13 13:02:22.376828] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.376852] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.376864] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.376898] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.376950] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550254065, cache_obj->added_lc()=false, 
cache_obj->get_object_id()=183, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.377768] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4719] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:22.377979] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.378003] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.378018] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742378017, replica_locations:[]}) [2024-09-13 13:02:22.378036] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.378059] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] 
[lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.378068] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.378089] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.378124] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550255243, cache_obj->added_lc()=false, cache_obj->get_object_id()=186, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.378422] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.378463] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=39] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.378476] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742378475, replica_locations:[]}) [2024-09-13 13:02:22.378546] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=46000, remain_us=855174, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.379317] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.379333] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.379342] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742379341, replica_locations:[]}) [2024-09-13 13:02:22.379378] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] will sleep(sleep_us=14000, remain_us=1883694, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.383721] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=17] ====== tenant freeze timer task ====== [2024-09-13 13:02:22.383751] WDIAG 
[STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=18][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:22.393871] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.393896] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.393907] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742393906, replica_locations:[]}) [2024-09-13 13:02:22.393920] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.393939] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.393945] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.393964] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.394000] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550271119, cache_obj->added_lc()=false, cache_obj->get_object_id()=188, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.395128] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.395144] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.395153] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742395152, replica_locations:[]}) [2024-09-13 
[2024-09-13 13:02:22.395194] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1867878, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.410818] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4018] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:22.410838] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.410846] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.410855] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742410854, replica_locations:[]})
[2024-09-13 13:02:22.410868] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.410895] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.410904] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.410939] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.410974] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550288092, cache_obj->added_lc()=false, cache_obj->get_object_id()=189, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.412074] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.412090] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.412098] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742412097, replica_locations:[]})
[2024-09-13 13:02:22.412139] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1850934, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.425083] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.425104] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.425117] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742425117, replica_locations:[]})
[2024-09-13 13:02:22.425137] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.425217] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.425226] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.425261] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.425310] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550302425, cache_obj->added_lc()=false, cache_obj->get_object_id()=187, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.426757] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.426777] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.426797] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742426796, replica_locations:[]})
[2024-09-13 13:02:22.426853] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=806868, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:22.428545] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.428561] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.428570] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742428569, replica_locations:[]})
[2024-09-13 13:02:22.428584] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.428602] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.428607] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.428622] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.428655] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550305773, cache_obj->added_lc()=false, cache_obj->get_object_id()=190, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.429566] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.429581] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.429590] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742429589, replica_locations:[]})
[2024-09-13 13:02:22.429625] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1833447, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.442829] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14024444314, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:22.447223] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.447238] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.447248] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742447247, replica_locations:[]})
[2024-09-13 13:02:22.447261] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.447279] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.447284] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.447304] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.447335] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550324454, cache_obj->added_lc()=false, cache_obj->get_object_id()=192, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.448409] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.448426] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.448446] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742448445, replica_locations:[]})
[2024-09-13 13:02:22.448493] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1814580, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.450700] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:22.450721] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742450695)
[2024-09-13 13:02:22.450730] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203742350687, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:22.450748] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:22.450756] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:22.450764] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742450737)
[2024-09-13 13:02:22.450808] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6E-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742450214)
[2024-09-13 13:02:22.450847] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:22.450835] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6E-0-0] [lt=21][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203742450214}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:22.450854] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:22.450858] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742450844)
[2024-09-13 13:02:22.466373] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0)
[2024-09-13 13:02:22.466974] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.466994] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.467009] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742467007, replica_locations:[]})
[2024-09-13 13:02:22.467024] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.467046] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.467058] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.467084] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.467132] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550344248, cache_obj->added_lc()=false, cache_obj->get_object_id()=193, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.468237] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.468257] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.468270] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742468269, replica_locations:[]})
[2024-09-13 13:02:22.468317] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1794756, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.474330] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.474348] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.474358] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742474358, replica_locations:[]})
[2024-09-13 13:02:22.474372] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.474389] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.474397] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.474417] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.474472] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550351588, cache_obj->added_lc()=false, cache_obj->get_object_id()=191, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.475785] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.475803] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.475812] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742475811, replica_locations:[]})
[2024-09-13 13:02:22.475853] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] will sleep(sleep_us=48000, remain_us=757867, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:22.487988] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.488008] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.488020] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742488019, replica_locations:[]})
[2024-09-13 13:02:22.488032] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.488051] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.488057] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.488075] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.488118] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550365234, cache_obj->added_lc()=false, cache_obj->get_object_id()=194, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.489253] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.489273] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.489282] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742489282, replica_locations:[]})
[2024-09-13 13:02:22.489327] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1773745, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.510087] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.510109] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.510120] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742510119, replica_locations:[]})
[2024-09-13 13:02:22.510132] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.510152] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.510157] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.510191] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.510235] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550387352, cache_obj->added_lc()=false, cache_obj->get_object_id()=196, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.511390] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.511411] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.511420] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742511419, replica_locations:[]})
[2024-09-13 13:02:22.511476] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1751597, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.520425] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=26][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4018, dropped:82, tid:20197}, {errcode:-4721, dropped:2014, tid:20197}])
[2024-09-13 13:02:22.524397] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.524420] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.524427] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.524449] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.524473] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742524472, replica_locations:[]})
[2024-09-13 13:02:22.524496] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.524511] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:48, local_retry_times:48, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:22.524527] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.524534] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.524542] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:22.524549] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:22.524553] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:22.524570] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:22.524580] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.524615] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550401734, cache_obj->added_lc()=false, cache_obj->get_object_id()=195, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.525552] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.525583] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=30][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:22.525985] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.526003] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.526009] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.526015] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.526024] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has 
changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742526023, replica_locations:[]}) [2024-09-13 13:02:22.526037] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.526045] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:22.526052] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.526067] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:22.526073] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:22.526080] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations 
(ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:22.526096] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:22.526110] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:22.526118] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:22.526126] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:22.526136] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:22.526144] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:22.526154] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] failed to generate the 
access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:22.526165] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:22.526188] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:22.526192] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:22.526196] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:22.526201] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:22.526208] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:22.526219] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:22.526227] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 
13:02:22.526232] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:22.526237] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:22.526242] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:22.526249] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=49, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:22.526266] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] will sleep(sleep_us=49000, remain_us=707454, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.533025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=90][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.533042] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.533049] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.533056] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.533065] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742533065, replica_locations:[]}) [2024-09-13 13:02:22.533079] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.533096] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:21, local_retry_times:21, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:22.533110] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4006] exec result is 
null(ret=-4006) [2024-09-13 13:02:22.533117] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.533124] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:22.533131] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:22.533135] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:22.533146] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:22.533156] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.533190] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550410308, cache_obj->added_lc()=false, cache_obj->get_object_id()=197, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 
0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.533965] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.533992] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=26][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:22.534299] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.534315] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.534321] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.534329] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.534340] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742534340, replica_locations:[]}) [2024-09-13 13:02:22.534353] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.534360] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:22.534366] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.534377] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:22.534382] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) 
[2024-09-13 13:02:22.534388] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:22.534401] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:22.534412] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:22.534417] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:22.534424] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:22.534428] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:22.534433] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:22.534446] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:22.534451] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:22.534455] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:22.534459] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:22.534464] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:22.534468] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:22.534472] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:22.534481] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:22.534490] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] Failed to generate 
plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:22.534495] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:22.534500] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:22.534505] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:22.534513] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=22, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:22.534526] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] will sleep(sleep_us=22000, remain_us=1728546, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.550719] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6F-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, 
generate_timestamp=1726203742550293) [2024-09-13 13:02:22.550751] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A6F-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203742550293}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:22.550768] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:22.550806] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742550762) [2024-09-13 13:02:22.550818] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read 
service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203742450736, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:22.550838] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:22.550846] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:22.550851] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742550826)
[2024-09-13 13:02:22.556985] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.557005] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.557012] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.557019] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.557030] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742557029, replica_locations:[]})
[2024-09-13 13:02:22.557047] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.557065] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:22, local_retry_times:22, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:22.557080] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.557089] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.557096] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:22.557103] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:22.557107] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:22.557137] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:22.557148] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.557184] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550434301, cache_obj->added_lc()=false, cache_obj->get_object_id()=199, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.559007] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:22.559031] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:22.559347] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.559359] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.559365] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.559371] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.559379] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742559378, replica_locations:[]})
[2024-09-13 13:02:22.559417] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=35][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:22.559426] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:22.559432] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:22.559535] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=46][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:22.559548] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:22.559557] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:22.559570] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:22.559580] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:22.559586] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:22.559591] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:22.559595] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:22.559599] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:22.559606] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:22.559612] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:22.559616] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:22.559620] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:22.559623] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:22.559629] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:22.559639] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:22.559653] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:22.559664] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:22.559675] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:22.559683] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:22.559690] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:22.559695] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=23, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:22.559710] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] will sleep(sleep_us=23000, remain_us=1703362, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:22.575776] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.575798] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.575804] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.575812] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.575823] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742575822, replica_locations:[]})
[2024-09-13 13:02:22.575833] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.575849] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:49, local_retry_times:49, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:22.575868] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.575883] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.575894] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:22.575908] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:22.575911] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:22.575923] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:22.575933] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.575970] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550453088, cache_obj->added_lc()=false, cache_obj->get_object_id()=198, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.577107] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=33][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:22.577130] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:22.577466] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.577485] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.577491] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.577510] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.577524] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742577523, replica_locations:[]})
[2024-09-13 13:02:22.577542] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:22.577557] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:22.577570] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:22.577588] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:22.577600] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:22.577612] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:22.577629] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:22.577640] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:22.577652] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:22.577657] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:22.577664] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:22.577668] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:22.577675] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:22.577682] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:22.577686] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=3][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:22.577693] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:22.577697] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:22.577702] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:22.577706] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:22.577717] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:22.577730] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:22.577735] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:22.577740] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:22.577745] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:22.577750] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=50, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:22.577764] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] will sleep(sleep_us=50000, remain_us=655957, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:22.583307] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.583334] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.583343] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.583352] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.583368] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742583367, replica_locations:[]})
[2024-09-13 13:02:22.583388] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:22.583408] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:23, local_retry_times:23, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:22.583427] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:22.583447] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:22.583461] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:22.583471] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:22.583480] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:22.583497] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:22.583510] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:22.583560] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550460674, cache_obj->added_lc()=false, cache_obj->get_object_id()=200, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:22.584653] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:22.584690] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=36][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:22.585025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.585052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:22.585063] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:22.585074] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:22.585086] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742585085, replica_locations:[]})
[2024-09-13 13:02:22.585106] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:22.585121] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:22.585135] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:22.585152] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:22.585163] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:22.585193] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=29][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:22.585248] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=54][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:22.585262] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:22.585272] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:22.585282] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:22.585291] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:22.585302] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:22.585313] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:22.585325] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:22.585334] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:22.585344] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:22.585354] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:22.585360] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:22.585367] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:22.585384] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:22.585396] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:22.585406] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:22.585417] WDIAG [SQL] stmt_query (ob_sql.cpp:229)
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:22.585428] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:22.585449] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=24, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:22.585469] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] will sleep(sleep_us=24000, remain_us=1677604, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.609947] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.609976] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.609983] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, 
try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.609992] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.610004] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742610002, replica_locations:[]}) [2024-09-13 13:02:22.610016] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.610032] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:24, local_retry_times:24, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:22.610049] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.610151] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=101][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.610160] WDIAG [SERVER] 
inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:22.610167] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:22.610170] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:22.610209] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:22.610223] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.610266] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550487382, cache_obj->added_lc()=false, cache_obj->get_object_id()=202, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.611154] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4721] fail to 
nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.611183] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=28][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:22.611535] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.611553] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.611559] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.611566] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.611576] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203742611575, replica_locations:[]}) [2024-09-13 13:02:22.611589] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.611597] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:22.611604] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.611616] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:22.611621] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:22.611629] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, 
candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:22.611643] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:22.611654] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:22.611659] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:22.611667] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:22.611671] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=3][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:22.611675] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:22.611680] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, 
column_name) [2024-09-13 13:02:22.611690] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:22.611695] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:22.611699] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:22.611703] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:22.611709] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:22.611714] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:22.611726] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:22.611735] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:22.611741] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:22.611746] WDIAG [SQL] stmt_query (ob_sql.cpp:229) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:22.611784] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=37][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:22.611799] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=25, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:22.611816] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] will sleep(sleep_us=25000, remain_us=1651257, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.617183] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=39] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, 
reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, 
proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:22.620580] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=36][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4719, dropped:76, tid:20300}]) [2024-09-13 13:02:22.627969] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.628216] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.628237] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.628254] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.628264] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.628279] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742628278, replica_locations:[]}) [2024-09-13 13:02:22.628312] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=31] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.628337] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:50, local_retry_times:50, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:22.628359] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.628371] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.628384] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:22.628394] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:22.628400] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:22.628423] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:22.628452] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=27][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.628505] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550505620, cache_obj->added_lc()=false, cache_obj->get_object_id()=201, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.629648] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.629677] WDIAG [SQL.DAS] 
nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:22.629773] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.630266] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.630283] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.630291] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.630316] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.630332] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742630331, 
replica_locations:[]}) [2024-09-13 13:02:22.630351] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:22.630408] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=603312, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.634001] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.635388] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.637038] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.637233] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.637252] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.637261] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.637270] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.637282] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742637281, replica_locations:[]}) [2024-09-13 13:02:22.637300] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.637325] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.637336] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.637361] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.637414] 
WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550514527, cache_obj->added_lc()=false, cache_obj->get_object_id()=203, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.638338] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.638527] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.638541] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.638547] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.638553] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.638561] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742638560, replica_locations:[]}) [2024-09-13 13:02:22.638603] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1624470, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.643167] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=25] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14024444314, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:22.649425] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.650844] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:22.650852] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A70-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, 
svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742650366) [2024-09-13 13:02:22.650862] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742650837) [2024-09-13 13:02:22.650871] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203742550825, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:22.650870] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A70-0-0] [lt=18][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203742650366}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:22.650913] WDIAG [STORAGE.TRANS] generate_min_weak_read_version 
(ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=27][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.650919] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.650927] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742650900) [2024-09-13 13:02:22.650939] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.650946] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.650949] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742650937) [2024-09-13 13:02:22.650992] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.664381] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF2-0-0] [lt=17][errcode=-4012] already 
timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:22.664403] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF2-0-0] [lt=21][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=992121) [2024-09-13 13:02:22.664415] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF2-0-0] [lt=11][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1) [2024-09-13 13:02:22.664423] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:1126) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF2-0-0] [lt=7][errcode=-4012] base before process failed(ret=-4012) [2024-09-13 13:02:22.664430] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF2-0-0] [lt=6][errcode=-4012] before process fail(ret=-4012) [2024-09-13 13:02:22.664827] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.665017] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCB-0-0] [lt=20][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203742664626, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035197, 
request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203742663614}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:22.665059] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCB-0-0] [lt=41][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:22.665119] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.665137] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.665143] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.665153] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.665168] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742665167, replica_locations:[]}) [2024-09-13 13:02:22.665205] INFO 
[SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.665228] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.665237] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.665263] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.665304] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550542422, cache_obj->added_lc()=false, cache_obj->get_object_id()=205, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.665563] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCB-0-0] [lt=5][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:22.666215] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:22.666406] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.666423] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.666432] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.666458] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF3-0-0] [lt=23][errcode=0] server is initiating(server_id=0, local_seq=25, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:22.666468] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=34] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.666481] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742666480, replica_locations:[]}) [2024-09-13 13:02:22.666530] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will 
sleep(sleep_us=27000, remain_us=1596543, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.667404] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF3-0-0] [lt=15][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:22.669520] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20] Cache replace map node details(ret=0, replace_node_count=0, replace_time=2962, replace_start_pos=188742, replace_num=62914) [2024-09-13 13:02:22.669537] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10) [2024-09-13 13:02:22.681648] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.681920] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.681949] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.681959] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.681973] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.681988] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742681987, replica_locations:[]}) [2024-09-13 13:02:22.682010] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.682039] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.682051] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.682077] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.682129] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6550559246, cache_obj->added_lc()=false, cache_obj->get_object_id()=204, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.683289] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.683497] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.683518] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.683527] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.683541] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.683556] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742683555, replica_locations:[]}) [2024-09-13 13:02:22.683617] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] will sleep(sleep_us=52000, remain_us=550104, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.688179] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.689582] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.693713] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.693964] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.693982] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.693990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.694005] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.694018] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742694017, replica_locations:[]}) [2024-09-13 13:02:22.694037] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.694058] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.694068] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.694087] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.694128] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6550571245, cache_obj->added_lc()=false, cache_obj->get_object_id()=207, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.695080] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.695282] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.695303] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.695309] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.695316] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.695327] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742695327, replica_locations:[]}) [2024-09-13 13:02:22.695371] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1567702, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.699622] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.700990] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.704386] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=16][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:22.723574] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.723948] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.723967] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.723974] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.723984] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.723994] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742723994, replica_locations:[]}) [2024-09-13 13:02:22.724005] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.724027] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.724036] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.724069] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.724112] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550601229, cache_obj->added_lc()=false, cache_obj->get_object_id()=209, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.725120] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.725315] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.725338] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.725344] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.725352] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.725360] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742725359, replica_locations:[]}) [2024-09-13 13:02:22.725420] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1537652, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.725900] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=15] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:22.725930] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=13] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=14558208) [2024-09-13 13:02:22.735844] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.736064] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.736086] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.736093] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.736103] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.736117] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742736116, replica_locations:[]}) [2024-09-13 13:02:22.736140] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.736163] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.736172] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main 
query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.736201] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.736248] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550613366, cache_obj->added_lc()=false, cache_obj->get_object_id()=208, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.737374] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.737597] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.737620] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.737630] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.737638] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.737648] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742737647, replica_locations:[]}) [2024-09-13 13:02:22.737711] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1] will sleep(sleep_us=53000, remain_us=496009, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.743150] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.744630] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.750678] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.750854] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A71-0-0] [lt=28][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", 
version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742750442) [2024-09-13 13:02:22.750889] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A71-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203742750442}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:22.750914] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.750926] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.750932] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742750901) [2024-09-13 13:02:22.752359] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.754603] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.754855] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.754882] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.754888] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.754899] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.754909] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742754908, replica_locations:[]}) [2024-09-13 13:02:22.754922] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.754944] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.754953] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.754971] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.755011] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550632128, cache_obj->added_lc()=false, cache_obj->get_object_id()=210, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.755873] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.756146] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.756164] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.756170] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.756178] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.756186] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742756186, replica_locations:[]}) [2024-09-13 13:02:22.756230] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1506842, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.786463] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.787418] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.787465] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=46][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.787475] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.787489] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.787518] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742787517, replica_locations:[]}) [2024-09-13 13:02:22.787537] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.787566] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.787596] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=28][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.787630] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.787684] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550664800, cache_obj->added_lc()=false, cache_obj->get_object_id()=212, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.788992] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.789216] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.789236] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.789243] 
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.789254] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.789267] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742789267, replica_locations:[]}) [2024-09-13 13:02:22.789336] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=31000, remain_us=1473736, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.790851] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.791063] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.791084] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.791093] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.791108] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.791125] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742791125, replica_locations:[]}) [2024-09-13 13:02:22.791146] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.791174] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.791186] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.791222] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.791287] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=24][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550668402, cache_obj->added_lc()=false, cache_obj->get_object_id()=211, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.792490] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.792855] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.792887] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.792900] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.792914] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.792928] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742792928, replica_locations:[]}) [2024-09-13 13:02:22.792984] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=54000, remain_us=440737, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.799238] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.800767] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.803111] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.804557] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.820535] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.820833] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.820855] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.820862] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.820882] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.820896] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742820895, replica_locations:[]}) [2024-09-13 13:02:22.820911] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.820932] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.820938] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.820961] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.821006] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550698123, cache_obj->added_lc()=false, cache_obj->get_object_id()=213, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.821965] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.822254] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.822278] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.822284] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.822294] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.822306] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742822305, replica_locations:[]}) [2024-09-13 13:02:22.822355] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1440718, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.829530] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:22.829570] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:22.843518] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14022347162, 
global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:22.847202] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.847464] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.847483] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.847495] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.847505] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.847519] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742847519, replica_locations:[]}) [2024-09-13 13:02:22.847533] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.847557] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.847565] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.847593] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.847634] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550724753, cache_obj->added_lc()=false, cache_obj->get_object_id()=214, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.848738] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.848948] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:22.848970] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.848989] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.849000] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.849012] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742849011, replica_locations:[]}) [2024-09-13 13:02:22.849061] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=55000, remain_us=384659, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.850965] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, 
dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:22.850991] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742850958) [2024-09-13 13:02:22.850979] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A72-0-0] [lt=22][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742850520) [2024-09-13 13:02:22.851000] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203742650899, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:22.851001] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A72-0-0] [lt=21][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203742850520}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, 
valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:22.851023] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.851029] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.851034] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742851011) [2024-09-13 13:02:22.851043] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.851049] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.851053] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742851041) [2024-09-13 13:02:22.854556] WDIAG 
[SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.854776] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.854795] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.854801] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.854812] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.854821] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742854820, replica_locations:[]}) [2024-09-13 13:02:22.854833] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations 
finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.854853] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.854859] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.854886] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.854927] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550732044, cache_obj->added_lc()=false, cache_obj->get_object_id()=215, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.855353] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3C-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:22.855367] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3C-0-0] [lt=13][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203742854987], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:22.855806] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) 
[20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCC-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:22.855837] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.856024] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.856039] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.856045] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.856073] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.856086] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742856085, replica_locations:[]}) [2024-09-13 13:02:22.856169] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.856132] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=33000, remain_us=1406940, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.856276] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.856481] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCC-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:22.857802] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.857868] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.869616] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=7] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9) [2024-09-13 13:02:22.872472] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=10] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:22.873930] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 
13:02:22.874291] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:22.892042] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.892433] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=81][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.892477] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=44][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.892486] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.892501] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.892519] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742892518, 
replica_locations:[]}) [2024-09-13 13:02:22.892541] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.892578] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.892585] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.892621] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.892674] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550769788, cache_obj->added_lc()=false, cache_obj->get_object_id()=217, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:22.894015] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.894204] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] 
[lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.894230] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.894237] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.894248] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.894264] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742894263, replica_locations:[]}) [2024-09-13 13:02:22.894322] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=34000, remain_us=1368750, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.904273] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:22.904793] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.904817] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.904823] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.904831] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.904842] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742904841, replica_locations:[]}) [2024-09-13 13:02:22.904858] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.904901] WDIAG [SQL] 
do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.904911] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:22.904933] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:22.906007] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.906232] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.906250] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.906257] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.906266] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.906276] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742906275, replica_locations:[]}) [2024-09-13 13:02:22.906321] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=56000, remain_us=327399, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.910361] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.911851] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.914345] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.915985] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=259][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.928582] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.928841] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] 
[lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.928865] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.928872] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.928888] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.928901] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742928900, replica_locations:[]}) [2024-09-13 13:02:22.928916] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.928939] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is 
null(ret=-4006) [2024-09-13 13:02:22.929957] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=43][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.930191] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.930211] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.930217] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.930224] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.930233] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742930232, replica_locations:[]}) [2024-09-13 13:02:22.930281] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will 
sleep(sleep_us=35000, remain_us=1332792, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.951089] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A73-0-0] [lt=25][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742950588) [2024-09-13 13:02:22.951108] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:22.951128] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203742951101) [2024-09-13 13:02:22.951120] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A73-0-0] [lt=29][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203742950588}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", 
svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:22.951137] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203742851010, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:22.951159] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.951165] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.951169] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742951146) [2024-09-13 13:02:22.951181] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.951185] WDIAG [STORAGE.TRANS] generate_server_version 
(ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:22.951190] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203742951179) [2024-09-13 13:02:22.962565] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.962897] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.962923] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.962946] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.962956] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.962972] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742962971, replica_locations:[]}) [2024-09-13 13:02:22.962988] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.963013] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.964357] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=53][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.964547] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.964565] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.964571] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.964579] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.964589] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742964588, replica_locations:[]}) [2024-09-13 13:02:22.964651] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=269070, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:22.965348] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.965502] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=49][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.965727] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.965749] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.965759] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.965774] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.965790] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742965789, replica_locations:[]}) [2024-09-13 13:02:22.965809] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:22.965827] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:22.966852] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.966935] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.967230] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.967256] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:22.967266] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:22.967280] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:22.967294] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203742967293, replica_locations:[]}) [2024-09-13 13:02:22.967356] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1295717, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, 
v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:22.973517] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:22.975041] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.003671] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.003962] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.003991] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.004001] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.004013] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.004033] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [LS_LOCATION]ls 
location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743004032, replica_locations:[]}) [2024-09-13 13:02:23.004131] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=95] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.004165] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.006047] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.006413] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.006460] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=45][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.006469] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.006484] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.006499] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743006498, replica_locations:[]}) [2024-09-13 13:02:23.006567] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=37000, remain_us=1256506, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.021060] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=20][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:0, dropped:13, tid:19944}]) [2024-09-13 13:02:23.021445] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.021852] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.022039] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 
13:02:23.022058] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.022065] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.022075] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.022090] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743022089, replica_locations:[]}) [2024-09-13 13:02:23.022112] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.022136] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.022146] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] 
[lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.022168] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.022229] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=24][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550899346, cache_obj->added_lc()=false, cache_obj->get_object_id()=221, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.022962] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.023390] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.023688] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.023708] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.023715] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.023724] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.023734] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743023733, replica_locations:[]}) [2024-09-13 13:02:23.023785] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=209935, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:23.033760] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.035433] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.043933] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=32][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.043943] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) 
[19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14020250010, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:23.044137] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1921) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=11] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1) [2024-09-13 13:02:23.044159] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1462) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=19] start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=161061270, cache_obj_num=1, cache_node_num=1) [2024-09-13 13:02:23.044170] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1479) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=10] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=161061270, cache_obj_num=1, cache_node_num=1) [2024-09-13 13:02:23.044183] INFO [SQL.PC] runTimerTask (ob_plan_cache.cpp:2678) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=10] schedule next cache evict task(evict_interval=5000000) [2024-09-13 13:02:23.044304] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.044328] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.044340] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] leader 
doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.044355] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.044375] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743044373, replica_locations:[]}) [2024-09-13 13:02:23.044396] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.044427] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.044459] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=30][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.044502] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.044634] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550921746, cache_obj->added_lc()=false, cache_obj->get_object_id()=223, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.046008] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.046265] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.046290] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.046300] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.046315] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.046327] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] 
[lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743046326, replica_locations:[]}) [2024-09-13 13:02:23.046394] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] will sleep(sleep_us=38000, remain_us=1216678, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.047036] INFO [SQL.PC] dump_all_objs (ob_plan_cache.cpp:2397) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=7] Dumping All Cache Objs(alloc_obj_list.count()=3, alloc_obj_list=[{obj_id:206, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:2, added_to_lc:true, mem_used:157887}, {obj_id:224, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:1, added_to_lc:false, mem_used:23272}, {obj_id:225, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:1, added_to_lc:false, mem_used:23272}]) [2024-09-13 13:02:23.047072] INFO [SQL.PC] runTimerTask (ob_plan_cache.cpp:2686) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=33] schedule next cache evict task(evict_interval=5000000) [2024-09-13 13:02:23.051112] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A74-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743050659) [2024-09-13 13:02:23.051140] WDIAG [STORAGE.TRANS] 
process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A74-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203743050659}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:23.051155] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:23.051186] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743051149) [2024-09-13 13:02:23.051199] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, 
last_post_cluster_heartbeat_tstamp_=1726203742951144, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:23.051219] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.051228] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.051232] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743051208) [2024-09-13 13:02:23.069732] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=24] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8) [2024-09-13 13:02:23.078595] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=58][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.080237] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=64][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.082031] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.082290] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.082310] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.082317] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.082344] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.082359] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743082358, replica_locations:[]}) [2024-09-13 13:02:23.082376] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.082400] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.082406] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.082427] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.082519] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=49][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550959636, cache_obj->added_lc()=false, cache_obj->get_object_id()=224, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.083995] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.084262] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.084280] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.084286] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.084310] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.084324] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743084323, replica_locations:[]}) [2024-09-13 13:02:23.084379] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=149342, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203743233720) [2024-09-13 13:02:23.084573] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.084796] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.084809] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.084815] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.084821] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.084829] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743084828, replica_locations:[]}) [2024-09-13 13:02:23.084841] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.084858] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.084864] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.084892] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.084933] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6550962050, cache_obj->added_lc()=false, cache_obj->get_object_id()=225, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.085898] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.086114] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.086129] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.086134] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.086144] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.086153] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743086152, replica_locations:[]}) [2024-09-13 13:02:23.086192] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] will sleep(sleep_us=39000, remain_us=1176880, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.093926] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=17] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:23.093952] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=5] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:23.093983] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=9] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:23.094412] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=27] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:23.094909] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=14] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) 
[2024-09-13 13:02:23.094937] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=15] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:23.095067] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.095416] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=5] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:23.095453] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=8] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:23.095899] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=13] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:23.096571] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.116353] INFO [PALF] log_loop_ (log_loop_thread.cpp:155) [20122][T1_LogLoop][T1][Y0-0000000000000000-0-0] [lt=5] LogLoopThread round_cost_time(us)(round_cost_time=1) [2024-09-13 13:02:23.118476] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=20] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:23.119317] INFO [SQL.QRR] runTimerTask (ob_udr_mgr.cpp:92) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] run rewrite rule refresh task(rule_mgr_->tenant_id_=1) [2024-09-13 13:02:23.119430] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) 
[20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=18][errcode=0] server is initiating(server_id=0, local_seq=26, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:23.120784] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=23] table not exist(tenant_id=1, database_id=201001, table_name=__all_sys_stat, table_name.ptr()="data_size:14, data:5F5F616C6C5F7379735F73746174", ret=-5019) [2024-09-13 13:02:23.120814] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=27][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_sys_stat, ret=-5019) [2024-09-13 13:02:23.120825] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=11][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_sys_stat, db_name=oceanbase) [2024-09-13 13:02:23.120835] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_sys_stat) [2024-09-13 13:02:23.120844] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=6][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:23.120850] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=6][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:23.120859] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] 
[lt=6][errcode=-5019] Table 'oceanbase.__all_sys_stat' doesn't exist [2024-09-13 13:02:23.120865] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=5][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:23.120871] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=6][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:23.120902] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=30][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:23.120909] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=6][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:23.120916] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=6][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:23.120923] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=6][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:23.120930] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=6][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:23.120945] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=8][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:23.120958] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) 
[20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=11][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:23.120967] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:23.120972] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:23.120979] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=6][errcode=-5019] fail to handle text query(stmt=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE, ret=-5019) [2024-09-13 13:02:23.120987] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:23.120998] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=10][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:23.121016] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=15][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:23.121036] WDIAG [SERVER] 
inner_close (ob_inner_sql_result.cpp:220) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=16][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:23.121043] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=7][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:23.121052] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=9][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:23.121066] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:23.121085] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=17][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.121092] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7D-0-0] [lt=8][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, aret=-5019, ret=-5019) [2024-09-13 13:02:23.121101] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE) [2024-09-13 13:02:23.121109] WDIAG [SERVER] 
retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:23.121117] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:23.121128] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203743120452, sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE) [2024-09-13 13:02:23.121137] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:23.121148] WDIAG [SHARE] fetch_max_id (ob_max_id_fetcher.cpp:482) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] execute sql failed(sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE, ret=-5019) [2024-09-13 13:02:23.121215] WDIAG [SQL.QRR] fetch_max_rule_version (ob_udr_sql_service.cpp:141) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] failed to fetch max rule version(ret=-5019, tenant_id=1) [2024-09-13 13:02:23.121229] WDIAG [SQL.QRR] sync_rule_from_inner_table (ob_udr_mgr.cpp:251) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] failed to fetch max rule version(ret=-5019) [2024-09-13 13:02:23.121240] WDIAG [SQL.QRR] runTimerTask (ob_udr_mgr.cpp:94) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] failed to sync rule from inner table(ret=-5019) [2024-09-13 13:02:23.125433] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.125706] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.125726] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.125733] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.125744] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.125757] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743125756, replica_locations:[]}) [2024-09-13 13:02:23.125773] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, 
ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:23.125796] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:23.125817] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:23.125848] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:23.125900] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551003017, cache_obj->added_lc()=false, cache_obj->get_object_id()=227, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:23.126861] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.127086] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.127105] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.127114] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.127126] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.127139] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743127139, replica_locations:[]})
[2024-09-13 13:02:23.127192] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=40000, remain_us=1135880, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:23.131293] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC78-0-0] [lt=6][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.133934] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=15] PNIO [ratelimit] time: 1726203743133933, bytes: 2842826, bw: 0.183848 MB/s, add_ts: 1002032, add_bytes: 193170
[2024-09-13 13:02:23.134324] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2189-0-0] [lt=42][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.134868] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB218D-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.135198] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB218E-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.135645] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2192-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.135885] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2193-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.136235] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2197-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.136495] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2198-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.136758] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.136826] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB219C-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.137078] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB219D-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.137465] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21A1-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:23.138236] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.143622] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.143931] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.143953] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.143961] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.143975] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.143993] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743143991, replica_locations:[]})
[2024-09-13 13:02:23.144018] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:23.144047] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:23.144069] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=20][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:23.144096] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:23.144148] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551021263, cache_obj->added_lc()=false, cache_obj->get_object_id()=226, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:23.145379] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.145646] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.145665] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.145674] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.145688] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.145712] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743145711, replica_locations:[]})
[2024-09-13 13:02:23.145778] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=60000, remain_us=87943, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:23.151209] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A75-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743150735)
[2024-09-13 13:02:23.151221] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:23.151233] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:23.151239] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743151208)
[2024-09-13 13:02:23.151231] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A75-0-0] [lt=20][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203743150735}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:23.151250] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:23.151261] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743151247)
[2024-09-13 13:02:23.151269] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203743051206, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:23.151278] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:23.151282] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:23.151285] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743151276)
[2024-09-13 13:02:23.157634] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.159388] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DA-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.166828] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=20] PNIO [ratelimit] time: 1726203743166826, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007620, add_bytes: 0
[2024-09-13 13:02:23.167413] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.167732] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.167750] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.167757] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.167766] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.167779] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743167778, replica_locations:[]})
[2024-09-13 13:02:23.167795] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:23.167819] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:23.167827] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:23.167851] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:23.167907] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551045024, cache_obj->added_lc()=false, cache_obj->get_object_id()=228, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:23.168912] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.169111] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.169127] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.169136] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.169143] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.169153] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743169152, replica_locations:[]})
[2024-09-13 13:02:23.169204] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] will sleep(sleep_us=41000, remain_us=1093869, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:23.191311] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782DD-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.195773] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.197207] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.206029] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.206376] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.206395] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.206402] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.206414] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.206456] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743206454, replica_locations:[]})
[2024-09-13 13:02:23.206473] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:23.206498] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:23.206504] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:23.206532] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:23.206588] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=17][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551083706, cache_obj->added_lc()=false, cache_obj->get_object_id()=229, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:23.207899] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.208130] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.208147] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.208157] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.208167] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.208179] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743208179, replica_locations:[]})
[2024-09-13 13:02:23.208231] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0] will sleep(sleep_us=25489, remain_us=25489, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203743233720)
[2024-09-13 13:02:23.209478] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=29][errcode=0] server is initiating(server_id=0, local_seq=27, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:23.210391] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.210586] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=18] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, table_name.ptr()="data_size:12, data:5F5F616C6C5F736572766572", ret=-5019)
[2024-09-13 13:02:23.210609] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=21][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-09-13 13:02:23.210611] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.210617] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_server, db_name=oceanbase)
[2024-09-13 13:02:23.210621] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.210624] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_server)
[2024-09-13 13:02:23.210627] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.210631] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:23.210634] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.210636] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:23.210643] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743210643, replica_locations:[]})
[2024-09-13 13:02:23.210652] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=15][errcode=-5019] Table 'oceanbase.__all_server' doesn't exist
[2024-09-13 13:02:23.210658] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=5][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:23.210653] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:23.210665] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=7][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:23.210670] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:23.210670] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:23.210673] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:23.210676] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:23.210677] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:23.210682] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:23.210687] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:23.210697] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:23.210699] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:23.210704] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:23.210710] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:23.210717] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:23.210721] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=3][errcode=-5019] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882, ret=-5019)
[2024-09-13 13:02:23.210733] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=11][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:23.210736] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551087853, cache_obj->added_lc()=false, cache_obj->get_object_id()=230, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:23.210738] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=5][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:23.210748] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=8][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:23.210760] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=9][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:23.210765] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=5][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:23.210768] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:23.210782] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:23.210791] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:23.210796] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7D-0-0] [lt=4][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:23.210804] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882)
[2024-09-13 13:02:23.210809] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:23.210821] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:23.210826] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203743210311, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882)
[2024-09-13 13:02:23.210835] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:23.210840] WDIAG get_my_sql_result_ (ob_table_access_helper.h:435) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-5019] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x2b07c6c55878, table=__all_server, condition=where svr_ip='172.16.51.35' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882, columns_str="zone")
[2024-09-13 13:02:23.210859] WDIAG read_and_convert_to_values_ (ob_table_access_helper.h:332) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-5019] fail to get ObMySQLResult(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, table=__all_server, condition=where svr_ip='172.16.51.35' and svr_port=2882)
[2024-09-13 13:02:23.210920] WDIAG [COORDINATOR] get_self_zone_name (table_accessor.cpp:634) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] get zone from __all_server failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", columns=0x2b07c6c55878, where_condition="where svr_ip='172.16.51.35' and svr_port=2882", zone_name_holder=)
[2024-09-13 13:02:23.210946] WDIAG [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:567) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-5019] get self zone
name failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", all_ls_election_reference_info=[]) [2024-09-13 13:02:23.210953] WDIAG [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:576) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] zone name is empty(ret=-5019, ret="OB_TABLE_NOT_EXIST", all_ls_election_reference_info=[]) [2024-09-13 13:02:23.210961] WDIAG [COORDINATOR] refresh (ob_leader_coordinator.cpp:144) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] get all ls election reference info failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:23.210973] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=5] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:23.211527] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.211746] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.211759] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.211764] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 
13:02:23.211772] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.211780] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743211779, replica_locations:[]}) [2024-09-13 13:02:23.211820] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=42000, remain_us=1051253, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.214592] INFO [DETECT] record_summary_info_and_logout_when_necessary_ (ob_lcl_batch_sender_thread.cpp:203) [20240][T1_LCLSender][T1][Y0-0000000000000000-0-0] [lt=14] ObLCLBatchSenderThread periodic report summary info(duty_ratio_percentage=0, total_constructed_detector=0, total_destructed_detector=0, total_alived_detector=0, _lcl_op_interval=30000, lcl_msg_map_.count()=0, *this={this:0x2b07c25fe2b0, is_inited:true, is_running:true, total_record_time:5010000, over_night_times:0}) [2024-09-13 13:02:23.223995] INFO [STORAGE.TRANS] run1 (ob_xa_trans_heartbeat_worker.cpp:84) [20243][T1_ObXAHbWorker][T1][Y0-0000000000000000-0-0] [lt=5] XA scheduler heartbeat task statistics(avg_time=1) [2024-09-13 13:02:23.225986] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=9] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:23.226023] INFO 
[SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=21] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=14558208) [2024-09-13 13:02:23.226096] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:130) [20248][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=9] ====== checkpoint timer task ====== [2024-09-13 13:02:23.226134] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:193) [20248][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=20] no logstream(ret=0, ls_cnt=0) [2024-09-13 13:02:23.226595] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:305) [20249][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=8] ====== traversal_flush timer task ====== [2024-09-13 13:02:23.226609] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:338) [20249][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=10] no logstream(ret=0, ls_cnt=0) [2024-09-13 13:02:23.227168] INFO [STORAGE.TRANS] dump_mapper_info (ob_lock_wait_mgr.h:66) [20231][T1_LockWaitMgr][T1][Y0-0000000000000000-0-0] [lt=13] report RowHolderMapper summary info(count=0, bkt_cnt=248) [2024-09-13 13:02:23.227950] INFO [STORAGE] runTimerTask (ob_empty_shell_task.cpp:39) [20252][T1_TabletShell][T1][Y0-0000000000000000-0-0] [lt=9] ====== [emptytablet] empty shell timer task ======(GC_EMPTY_TABLET_SHELL_INTERVAL=5000000) [2024-09-13 13:02:23.227955] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:116) [20251][T1_TabletGC][T1][Y0-0000000000000000-0-0] [lt=13] ====== [tabletchange] timer task ======(GC_CHECK_INTERVAL=5000000) [2024-09-13 13:02:23.227979] INFO [STORAGE] runTimerTask (ob_empty_shell_task.cpp:107) [20252][T1_TabletShell][T1][Y0-0000000000000000-0-0] [lt=22] [emptytablet] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, times=1) [2024-09-13 13:02:23.227998] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:242) 
[20251][T1_TabletGC][T1][Y0-0000000000000000-0-0] [lt=25] [tabletchange] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, times=1) [2024-09-13 13:02:23.228713] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=13] gc stale ls task succ [2024-09-13 13:02:23.229561] WDIAG [ARCHIVE] do_thread_task_ (ob_archive_sender.cpp:256) [20256][T1_ArcSender][T1][YB42AC103323-000621F920F60C7D-0-0] [lt=4][errcode=-4018] try free send task failed(ret=-4018) [2024-09-13 13:02:23.233147] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=13] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:23.233816] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=14][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203743233721, ctx_timeout_ts=1726203743233721, worker_timeout_ts=1726203743233720, default_timeout=1000000) [2024-09-13 13:02:23.233835] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=19][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:23.233842] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:23.233853] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) 
[2024-09-13 13:02:23.233864] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) [2024-09-13 13:02:23.233886] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=10][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.233891] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.233914] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.233954] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551111072, cache_obj->added_lc()=false, cache_obj->get_object_id()=231, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.234793] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203743233720, ctx_timeout_ts=1726203743233720, worker_timeout_ts=1726203743233720, default_timeout=1000000) [2024-09-13 13:02:23.234816] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=22][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:23.234822] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=6][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:23.234833] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=11][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:23.234841] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.234854] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:23.234890] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=1][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:23.234904] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 
13:02:23.234909] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.234931] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=5] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:23.234944] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=0][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:23.234962] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=12][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:23.234972] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.234977] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=4] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000458) [2024-09-13 13:02:23.234985] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7E-0-0] [lt=7][errcode=-4012] failed to process final(executor={ObIExecutor:, 
sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:23.234991] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:23.234998] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:23.235004] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:23.235011] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4012] query failed(ret=-4012, conn=0x2b07a13e06e0, start=1726203741234512, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:23.235020] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4012] read failed(ret=-4012) [2024-09-13 13:02:23.235026] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:23.235076] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) 
[20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551112179, cache_obj->added_lc()=false, cache_obj->get_object_id()=233, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.235104] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:104) [20264][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=6] tx gc loop thread is running(MTL_ID()=1) [2024-09-13 13:02:23.235119] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:111) [20264][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=14] try gc retain ctx [2024-09-13 13:02:23.235132] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:23.235140] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:23.235150] WDIAG [SHARE] get_snapshot_gc_scn (ob_global_stat_proxy.cpp:164) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:23.235158] WDIAG [STORAGE] get_global_info (ob_tenant_freeze_info_mgr.cpp:811) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4012] fail to get global info(ret=-4012, tenant_id=1) [2024-09-13 13:02:23.235168] WDIAG [STORAGE] try_update_info (ob_tenant_freeze_info_mgr.cpp:954) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4012] failed to get global info(ret=-4012) [2024-09-13 13:02:23.235173] WDIAG [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:1008) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4012] fail to try update 
info(tmp_ret=-4012, tmp_ret="OB_TIMEOUT") [2024-09-13 13:02:23.236041] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C81-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.236316] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.236333] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.236342] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.236352] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.236380] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=7][errcode=0] server is initiating(server_id=0, local_seq=28, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:23.236936] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:23.236953] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 
[2024-09-13 13:02:23.236960] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:23.236967] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:23.237300] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:23.237324] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=21][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:23.237332] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:23.237341] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:23.237364] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=22][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:23.237369] WDIAG [SQL.RESV] 
resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:23.237377] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=6][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:23.237384] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:23.237389] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:23.237393] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:23.237397] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:23.237401] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:23.237406] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:23.237410] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=3][errcode=-5019] execute stmt_resolver 
failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:23.237418] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:23.237423] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=5][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:23.237444] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=19][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:23.237448] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:23.237452] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:23.237457] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:23.237464] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=6][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:23.237476] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=9][errcode=-5019] 
[RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:23.237489] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=10][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:23.237496] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=6][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:23.237499] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:23.237516] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:23.237525] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.237532] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:23.237544] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=12][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM 
__all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:23.237552] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:23.237557] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:23.237564] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=6][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203743237184, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:23.237570] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=6][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:23.237574] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=3][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:23.237614] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=7][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:23.237628] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=13][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 
13:02:23.237636] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=8][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:23.237647] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=10][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:23.237657] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=8][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:23.237676] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=18][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:23.237684] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C81-0-0] [lt=8][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:23.244315] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=21] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14022347162, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:23.251256] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A76-0-0] [lt=26][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743250804) [2024-09-13 13:02:23.251286] 
WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A76-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203743250804}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:23.251296] WDIAG [PALF] convert_to_ts (scn.cpp:265) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4016] invalid scn should not convert to ts (val_=18446744073709551615) [2024-09-13 13:02:23.251308] INFO [STORAGE.TRANS] print_stat_ (ob_tenant_weak_read_service.cpp:541) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] [WRS] [TENANT_WEAK_READ_SERVICE] [STAT](tenant_id=1, server_version={version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0}, server_version_delta=1726203743251294, in_cluster_service=false, cluster_version={val:18446744073709551615, v:3}, min_cluster_version={val:18446744073709551615, v:3}, max_cluster_version={val:18446744073709551615, v:3}, get_cluster_version_err=0, cluster_version_delta=-1, cluster_service_master="0.0.0.0:0", cluster_service_tablet_id={id:226}, post_cluster_heartbeat_count=0, succ_cluster_heartbeat_count=0, cluster_heartbeat_interval=1000000, local_cluster_version={val:0, v:0}, local_cluster_delta=1726203743251294, force_self_check=true, 
weak_read_refresh_interval=100000) [2024-09-13 13:02:23.251332] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:23.251347] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743251328) [2024-09-13 13:02:23.251353] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203743151274, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:23.251365] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:23.251372] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, 
last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:23.251394] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.251402] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.251406] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743251383) [2024-09-13 13:02:23.254024] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.254286] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.254300] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.254306] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.254315] 
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.254329] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743254328, replica_locations:[]}) [2024-09-13 13:02:23.254342] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.254363] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.254371] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.254386] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.254425] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551131543, cache_obj->added_lc()=false, cache_obj->get_object_id()=232, 
cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.255285] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.255681] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.255732] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.255747] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.255753] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.255759] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.255768] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743255767, replica_locations:[]}) [2024-09-13 13:02:23.255816] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=1007257, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.257046] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.269837] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=35] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7) [2024-09-13 13:02:23.299096] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.299339] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.299358] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.299364] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.299375] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.299388] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743299387, replica_locations:[]}) [2024-09-13 13:02:23.299403] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.299425] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.299443] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.299476] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 
13:02:23.299521] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551176638, cache_obj->added_lc()=false, cache_obj->get_object_id()=234, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.300569] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.300988] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.301007] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.301013] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.301024] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.301036] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743301036, replica_locations:[]}) [2024-09-13 13:02:23.301090] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=44000, remain_us=961983, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.316558] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.317990] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.329646] INFO [ARCHIVE] do_thread_task_ (ob_archive_sender.cpp:262) [20256][T1_ArcSender][T1][YB42AC103323-000621F920F60C7D-0-0] [lt=17] ObArchiveSender is running(thread_index=0) [2024-09-13 13:02:23.330029] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:23.330042] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60C97-0-0] [lt=20][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203743330002}) 
[2024-09-13 13:02:23.330060] INFO [STORAGE.TRANS] statistics (ob_gts_source.cpp:70) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=30] gts statistics(tenant_id=1, gts_rpc_cnt=0, get_gts_cache_cnt=9073, get_gts_with_stc_cnt=0, try_get_gts_cache_cnt=0, try_get_gts_with_stc_cnt=0, wait_gts_elapse_cnt=0, try_wait_gts_elapse_cnt=0) [2024-09-13 13:02:23.330069] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:23.345309] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.345588] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.345609] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.345616] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.345625] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.345639] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743345637, replica_locations:[]}) [2024-09-13 13:02:23.345655] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.345685] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.345698] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.345724] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.345779] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551222894, cache_obj->added_lc()=false, cache_obj->get_object_id()=235, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:23.346890] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.347120] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.347136] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.347142] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.347150] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.347159] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743347158, replica_locations:[]}) [2024-09-13 13:02:23.347212] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] will sleep(sleep_us=45000, remain_us=915860, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.348355] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=20] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:23.351274] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A77-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743350865) [2024-09-13 13:02:23.351303] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A77-0-0] [lt=24][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203743350865}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:23.351327] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.351338] WDIAG [STORAGE.TRANS] 
generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.351345] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743351314) [2024-09-13 13:02:23.355870] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3D-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:23.355901] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3D-0-0] [lt=31][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203743355459], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:23.356286] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCD-0-0] [lt=13][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203743355931, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035249, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203743355520}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:23.356327] WDIAG [RPC.FRAME] run 
(ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCD-0-0] [lt=40][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:23.356900] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCD-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:23.378502] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.380222] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990057-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.392471] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.392837] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.392887] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.392905] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.392916] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.392933] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743392932, replica_locations:[]}) [2024-09-13 13:02:23.392975] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=39] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.393008] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.393020] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.393059] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.393117] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551270230, cache_obj->added_lc()=false, cache_obj->get_object_id()=236, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.394222] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.394499] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.394517] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.394526] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.394536] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.394549] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203743394548, replica_locations:[]}) [2024-09-13 13:02:23.394612] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=46000, remain_us=868461, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.423536] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92169005A-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.440846] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.441111] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.441133] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.441139] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.441148] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.441163] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743441162, replica_locations:[]}) [2024-09-13 13:02:23.441181] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.441200] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:23.441220] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.441228] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.441250] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.441304] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551318418, cache_obj->added_lc()=false, cache_obj->get_object_id()=237, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.442418] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.442653] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.442677] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.442687] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.442702] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.442718] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203743442717, replica_locations:[]}) [2024-09-13 13:02:23.442775] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=820298, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.444673] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14022347162, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:23.445822] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] [lt=11][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:23.451376] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:23.451396] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:23.451409] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, 
local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743451370) [2024-09-13 13:02:23.451417] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203743251363, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:23.451414] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A78-0-0] [lt=27][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743450936) [2024-09-13 13:02:23.451442] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.451449] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.451455] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743451424) [2024-09-13 13:02:23.451469] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.451446] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A78-0-0] [lt=30][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203743450936}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:23.451476] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.451479] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743451466) [2024-09-13 13:02:23.469931] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6) [2024-09-13 13:02:23.487275] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) 
[20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=15][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:23.489996] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.490330] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.490350] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.490357] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.490365] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.490381] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743490380, replica_locations:[]}) [2024-09-13 
13:02:23.490396] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.490419] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.490429] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.490463] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.490509] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551367624, cache_obj->added_lc()=false, cache_obj->get_object_id()=238, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.491586] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.491810] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.491834] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.491840] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.491848] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.491857] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743491856, replica_locations:[]}) [2024-09-13 13:02:23.491923] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=48000, remain_us=771150, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.540192] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.540744] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.540776] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.540787] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.540824] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=34] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.540850] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743540848, replica_locations:[]}) [2024-09-13 13:02:23.540885] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.540917] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.540930] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.540969] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.541022] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551418138, cache_obj->added_lc()=false, cache_obj->get_object_id()=239, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.542068] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.542276] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.542298] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.542304] 
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.542313] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.542325] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743542324, replica_locations:[]}) [2024-09-13 13:02:23.542381] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=720692, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.551485] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A79-0-0] [lt=30][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743551011) [2024-09-13 13:02:23.551514] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A79-0-0] [lt=24][errcode=-4341] tenant weak read service process cluster heartbeat RPC 
fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203743551011}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:23.551532] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:23.551563] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743551525) [2024-09-13 13:02:23.551575] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203743451424, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:23.551596] WDIAG 
[STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.551605] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.551610] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743551584) [2024-09-13 13:02:23.591652] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.592043] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.592065] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.592071] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.592080] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.592093] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743592092, replica_locations:[]})
[2024-09-13 13:02:23.592109] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:23.592132] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:23.592138] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:23.592161] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:23.592210] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551469327, cache_obj->added_lc()=false, cache_obj->get_object_id()=240, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:23.593323] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=56][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.593633] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.593651] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.593657] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.593665] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.593678] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743593676, replica_locations:[]})
[2024-09-13 13:02:23.593736] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=50000, remain_us=669336, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:23.617917] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=37] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:23.621795] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=25][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:1183, tid:20197}])
[2024-09-13 13:02:23.644051] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.644357] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.644390] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.644401] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.644415] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.644453] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743644432, replica_locations:[]})
[2024-09-13 13:02:23.644478] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=41] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:23.644505] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:50, local_retry_times:50, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:23.644529] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:23.644539] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:23.644551] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:23.644562] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:23.644569] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:23.644598] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:23.644613] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:23.644674] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551521786, cache_obj->added_lc()=false, cache_obj->get_object_id()=241, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:23.645086] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=23] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14022347162, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:23.645967] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:23.646005] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=37][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:23.646198] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=32][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.646797] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.646828] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.646838] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.647120] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=278] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.647148] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743647147, replica_locations:[]})
[2024-09-13 13:02:23.647172] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:23.647323] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=149][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:23.647540] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=215][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:23.647561] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:23.647575] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:23.647584] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:23.647602] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:23.647616] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:23.647625] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:23.647635] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:23.647645] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:23.647653] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:23.647663] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:23.647677] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:23.647686] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:23.647696] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:23.647704] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:23.647712] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:23.647723] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:23.647738] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:23.647752] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:23.647765] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:23.647772] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:23.647785] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:23.647796] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=51, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:23.647823] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] will sleep(sleep_us=51000, remain_us=615250, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:23.651575] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7A-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743651073)
[2024-09-13 13:02:23.651600] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:23.651614] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:23.651604] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7A-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203743651073}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:23.651623] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743651584)
[2024-09-13 13:02:23.651645] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:23.651659] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743651640)
[2024-09-13 13:02:23.651667] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203743551582, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:23.651682] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:23.651689] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:23.651692] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743651675)
[2024-09-13 13:02:23.665007] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF3-0-0] [lt=15][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:23.665035] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF3-0-0] [lt=26][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=996850)
[2024-09-13 13:02:23.665046] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF3-0-0] [lt=9][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1)
[2024-09-13 13:02:23.665056] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:1126) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF3-0-0] [lt=8][errcode=-4012] base before process failed(ret=-4012)
[2024-09-13 13:02:23.665062] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20292][T1_L0_G0][T1][YB42AC103326-00062119ED978DF3-0-0] [lt=6][errcode=-4012] before process fail(ret=-4012)
[2024-09-13 13:02:23.665223] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20292][T1_L0_G0][T1][YB42AC103326-00062119EC0A1188-0-0] [lt=6][errcode=0] server is initiating(server_id=0, local_seq=29, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:23.666089] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119EC0A1188-0-0] [lt=14][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:23.670038] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=24] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5)
[2024-09-13 13:02:23.699123] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.699413] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.699465] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=50][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.699478] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.699496] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.699517] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743699516, replica_locations:[]})
[2024-09-13 13:02:23.699541] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:23.699569] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:51, local_retry_times:51, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:23.699592] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:23.699604] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:23.699621] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:23.699632] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:23.699640] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:23.699661] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:23.699676] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:23.699738] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551576851, cache_obj->added_lc()=false, cache_obj->get_object_id()=242, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:23.701012] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:23.701049] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=35][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:23.701213] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:23.701484] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.701515] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:23.701528] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:23.701544] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:23.701564] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743701562, replica_locations:[]})
[2024-09-13 13:02:23.701584] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:23.701600] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:23.701613] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:23.701630] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:23.701639] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:23.701648] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:23.701667] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:23.701682] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:23.701694] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:23.701705] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:23.701713] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:23.701721] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:23.701731] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:23.701744] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:23.701755] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:23.701763] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:23.701771] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:23.701778] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:23.701786] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:23.701803] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:23.701816] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029)
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:23.701828] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:23.701837] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:23.701849] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:23.701860] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=52, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:23.701901] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=29] will sleep(sleep_us=52000, remain_us=561172, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.720932] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=23][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:23.726101] INFO [SERVER] 
prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:23.726162] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=30] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=14558208) [2024-09-13 13:02:23.751642] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7B-0-0] [lt=22][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743751151) [2024-09-13 13:02:23.751686] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7B-0-0] [lt=38][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203743751151}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:23.751705] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb 
(ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:23.751730] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743751696) [2024-09-13 13:02:23.751740] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203743651673, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:23.751766] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.751782] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.751790] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, 
server_version_epoch_tstamp_=1726203743751749) [2024-09-13 13:02:23.754164] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.754504] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.754529] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.754541] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.754554] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.754571] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743754570, replica_locations:[]}) [2024-09-13 13:02:23.754592] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.754614] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:52, local_retry_times:52, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:23.754634] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.754643] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.754656] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:23.754664] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:23.754670] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:23.754707] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, 
column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:23.754719] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.754775] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551631889, cache_obj->added_lc()=false, cache_obj->get_object_id()=243, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.756036] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.756072] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=34][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:23.756224] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.756505] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.756534] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.756545] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.756557] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.756573] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743756572, replica_locations:[]}) [2024-09-13 13:02:23.756592] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.756607] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, 
ls_id={id:1}) [2024-09-13 13:02:23.756617] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.756636] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:23.756645] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:23.756654] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:23.756674] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:23.756689] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:23.756696] WDIAG [SQL.JO] 
compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:23.756706] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:23.756712] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:23.756719] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:23.756732] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:23.756746] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:23.756755] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:23.756762] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:23.756769] WDIAG 
[SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:23.756776] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:23.756786] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:23.756800] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:23.756813] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:23.756822] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:23.756831] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:23.756840] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:23.756851] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, 
column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=53, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:23.756892] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15] will sleep(sleep_us=53000, remain_us=506181, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.810204] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.810527] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.810556] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.810563] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.810574] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.810589] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743810588, replica_locations:[]}) [2024-09-13 13:02:23.810611] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.810636] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:53, local_retry_times:53, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:23.810661] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.810673] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.810690] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:23.810700] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:23.810708] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:23.810729] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:23.810744] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.810798] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551687914, cache_obj->added_lc()=false, cache_obj->get_object_id()=244, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.811862] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.811901] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=37][errcode=-4721] fail to get tablet locations(ret=-4721, 
tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:23.812023] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.812647] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.812671] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.812683] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.812701] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.812717] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743812715, replica_locations:[]}) [2024-09-13 13:02:23.812739] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] 
[lt=20][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.812756] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:23.812769] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.812780] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:23.812788] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:23.812795] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:23.812808] WDIAG [SQL.OPT] calculate_candi_tablet_locations 
(ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:23.812819] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:23.812824] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:23.812831] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:23.812836] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:23.812840] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:23.812848] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:23.812858] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to generate plan tree for 
plain select(ret=-4721) [2024-09-13 13:02:23.812866] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:23.812887] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:23.812897] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:23.812908] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:23.812916] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:23.812931] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:23.812944] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:23.812955] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:23.812967] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, 
column_name, ret=-4721) [2024-09-13 13:02:23.812979] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:23.812990] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=54, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:23.813018] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16] will sleep(sleep_us=54000, remain_us=450055, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.830569] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:23.830608] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=37][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:23.830648] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:23.830659] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] 
[lt=10][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:23.830676] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:23.845514] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14022347162, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:23.851726] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7C-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743851226) [2024-09-13 13:02:23.851758] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7C-0-0] [lt=30][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203743851226}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, 
cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:23.851768] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:23.851797] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=27][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743851760) [2024-09-13 13:02:23.851814] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203743751747, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:23.851846] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.851861] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.851869] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743851828) [2024-09-13 13:02:23.851926] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=49][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.851936] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.851945] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743851921) [2024-09-13 13:02:23.854135] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=41] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=8577, clean_start_pos=377487, clean_num=125829) [2024-09-13 13:02:23.856364] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3E-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:23.856386] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3E-0-0] [lt=21][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203743855933], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:23.856891] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCE-0-0] 
[lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:23.857683] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCE-0-0] [lt=24][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203743857358, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035264, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203743857311}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:23.857731] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCE-0-0] [lt=47][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:23.867318] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.867576] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.867603] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.867610] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.867619] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.867636] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743867635, replica_locations:[]}) [2024-09-13 13:02:23.867719] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=80] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.867744] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:54, local_retry_times:54, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:23.867777] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=27][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.867790] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] fail 
close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.867801] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:23.867808] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:23.867812] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:23.867845] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:23.867864] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.867926] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551745041, cache_obj->added_lc()=false, cache_obj->get_object_id()=245, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.869191] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.869251] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=58][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:23.869416] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.869648] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.869667] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.869674] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.869681] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.869691] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743869690, replica_locations:[]}) [2024-09-13 13:02:23.869705] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.869717] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:23.869723] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.869735] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:23.869740] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", 
tablet_id={id:1}) [2024-09-13 13:02:23.869747] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:23.869758] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:23.869768] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:23.869773] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:23.869779] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:23.869783] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:23.869788] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:23.869794] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:23.869806] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:23.869813] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:23.869819] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:23.869825] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:23.869832] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:23.869839] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:23.869856] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:23.869869] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to generate 
plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:23.869889] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:23.869897] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:23.869909] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:23.869920] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=55, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:23.869945] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] will sleep(sleep_us=55000, remain_us=393128, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.870147] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=30] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4) [2024-09-13 13:02:23.872898] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, 
request done=0/0, request doing=0/0) [2024-09-13 13:02:23.872936] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=17] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:23.873899] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=23] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:23.925246] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.925456] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.925489] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.925498] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.925513] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.925533] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [LS_LOCATION]ls location cache 
has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743925531, replica_locations:[]}) [2024-09-13 13:02:23.925554] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.925577] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=16][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:55, local_retry_times:55, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:23.925594] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.925602] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.925616] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:23.925622] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:23.925627] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to close 
result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:23.925641] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:23.925654] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.925704] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551802820, cache_obj->added_lc()=false, cache_obj->get_object_id()=246, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.926666] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.926695] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=28][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:23.926857] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.927036] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.927080] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=41][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.927094] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.927109] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.927127] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743927126, replica_locations:[]}) [2024-09-13 13:02:23.927142] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4721] get empty location from meta table(ret=-4721, 
ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.927153] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:23.927160] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:23.927171] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:23.927180] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:23.927188] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:23.927201] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:23.927211] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:23.927221] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:23.927229] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:23.927236] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:23.927241] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:23.927251] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:23.927261] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) 
[2024-09-13 13:02:23.927265] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:23.927269] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:23.927276] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:23.927280] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:23.927288] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:23.927299] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:23.927308] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:23.927316] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:23.927324] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) 
[2024-09-13 13:02:23.927334] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:23.927339] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=56, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:23.927359] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] will sleep(sleep_us=56000, remain_us=335714, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:23.951814] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7D-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743951309) [2024-09-13 13:02:23.951849] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7D-0-0] [lt=30][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203743951309}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", 
svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:23.951864] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:23.951897] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203743951856) [2024-09-13 13:02:23.951907] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203743851826, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:23.951931] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.951945] WDIAG [STORAGE.TRANS] generate_server_version 
(ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:23.951949] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203743951919) [2024-09-13 13:02:23.962391] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:23.983668] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.984187] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.984211] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.984218] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.984228] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.984267] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743984265, replica_locations:[]}) [2024-09-13 13:02:23.984288] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:23.984311] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:56, local_retry_times:56, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:23.984350] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:23.984363] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:23.984376] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:23.984384] WDIAG [SERVER] 
force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:23.984388] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:23.984430] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:23.984515] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=28][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551861615, cache_obj->added_lc()=false, cache_obj->get_object_id()=247, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:23.985897] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:23.986093] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:23.986114] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:23.986121] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:23.986131] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:23.986140] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203743986140, replica_locations:[]}) [2024-09-13 13:02:23.986195] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=276878, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:24.043492] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.043796] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.043820] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.043828] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.043838] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.043854] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744043853, replica_locations:[]}) [2024-09-13 13:02:24.043867] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.043901] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.043910] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:24.043931] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.043978] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551921094, cache_obj->added_lc()=false, cache_obj->get_object_id()=248, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.045048] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.045218] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.045240] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.045246] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.045255] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.045268] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744045267, replica_locations:[]}) [2024-09-13 13:02:24.045320] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=217752, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203744263072) [2024-09-13 13:02:24.051904] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.051910] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7E-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744051406) [2024-09-13 13:02:24.051924] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.051931] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ 
(ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744051888) [2024-09-13 13:02:24.051931] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7E-0-0] [lt=20][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203744051406}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:24.051945] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:24.051966] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, 
generate_timestamp=1726203744051940) [2024-09-13 13:02:24.051977] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203743951917, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:24.051993] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.052002] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.052010] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744051990) [2024-09-13 13:02:24.054486] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14020250010, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:24.070255] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=24] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3) [2024-09-13 13:02:24.088394] WDIAG [SHARE.SCHEMA] async_refresh_schema 
(ob_multi_version_schema_service.cpp:2414) [20288][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=20][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:24.088430] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20288][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=34][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=1996581)
[2024-09-13 13:02:24.088472] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20288][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=41][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1)
[2024-09-13 13:02:24.088482] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:1126) [20288][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=9][errcode=-4012] base before process failed(ret=-4012)
[2024-09-13 13:02:24.088491] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20288][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=7][errcode=-4012] before process fail(ret=-4012)
[2024-09-13 13:02:24.088471] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=41] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1})
[2024-09-13 13:02:24.088671] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20288][T1_L0_G0][T1][YB42AC103326-00062119D9A3A431-0-0] [lt=11][errcode=0] server is initiating(server_id=0, local_seq=30, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:24.089755] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D9A3A431-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:24.093597] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=18] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:24.093630] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=6] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:24.093644] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=10] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:24.094702] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=18] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:24.094740] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=12] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:24.095123] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:24.095422] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=14] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:24.095484] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=8] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:24.095920] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=9] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:24.103566] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.103911] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.103935] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.103942] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.103951] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.103982] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744103981, replica_locations:[]})
[2024-09-13 13:02:24.103998] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:24.104025] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:24.104034] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:24.104073] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.104124] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6551981241, cache_obj->added_lc()=false, cache_obj->get_object_id()=249, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:24.105220] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.105373] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.105393] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.105399] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.105407] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.105417] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744105416, replica_locations:[]})
[2024-09-13 13:02:24.105499] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=157574, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:24.118568] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=19] swc wakeup.(stat_period_=1000000, ready=false)
[2024-09-13 13:02:24.131893] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC79-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:24.138952] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=17] PNIO [ratelimit] time: 1726203744138949, bytes: 2911188, bw: 0.064870 MB/s, add_ts: 1005016, add_bytes: 68362
[2024-09-13 13:02:24.151932] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7F-0-0] [lt=24][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744151475)
[2024-09-13 13:02:24.151966] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A7F-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203744151475}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:24.151997] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:24.152014] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:24.152022] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744151981)
[2024-09-13 13:02:24.159255] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:565) [20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=30] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0)
[2024-09-13 13:02:24.164948] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.165189] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.165212] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.165219] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.165231] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.165245] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744165244, replica_locations:[]})
[2024-09-13 13:02:24.165261] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:24.165287] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:24.165296] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:24.165317] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.165377] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552042493, cache_obj->added_lc()=false, cache_obj->get_object_id()=250, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:24.166478] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.166753] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.166774] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.166781] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.166789] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.166799] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744166798, replica_locations:[]})
[2024-09-13 13:02:24.166854] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0] will sleep(sleep_us=60000, remain_us=96218, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:24.174447] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=23] PNIO [ratelimit] time: 1726203744174445, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007619, add_bytes: 0
[2024-09-13 13:02:24.193328] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.202777] INFO [MDS] for_each_ls_in_tenant (mds_tenant_service.cpp:237) [20107][T1_Occam][T1][YB42AC103323-000621F921A60C80-0-0] [lt=5] for each ls(succ_num=0, ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.209753] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}])
[2024-09-13 13:02:24.226208] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=14] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:24.226254] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=23] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=14558208)
[2024-09-13 13:02:24.227119] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.227294] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:346) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=17] ====== check clog disk timer task ======
[2024-09-13 13:02:24.227314] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=17] get_disk_usage(ret=0, capacity(MB):=0, used(MB):=0)
[2024-09-13 13:02:24.227328] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=7] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false)
[2024-09-13 13:02:24.227419] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.227445] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.227451] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.227463] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.227477] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744227476, replica_locations:[]})
[2024-09-13 13:02:24.227493] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:24.227517] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:24.227526] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:24.227546] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.227593] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552104710, cache_obj->added_lc()=false, cache_obj->get_object_id()=251, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:24.228729] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.228782] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=9] gc stale ls task succ
[2024-09-13 13:02:24.228965] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.228983] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.228990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.228998] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.229008] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744229007, replica_locations:[]})
[2024-09-13 13:02:24.229071] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1] will sleep(sleep_us=34002, remain_us=34002, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203744263072)
[2024-09-13 13:02:24.233201] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=14] start do ls ha handler(ls_id_array_=[])
[2024-09-13 13:02:24.233457] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.233973] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.234903] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.235575] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.235920] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.237130] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=0] read /sys/class/net/eth0/speed failed, errno 22
[2024-09-13 13:02:24.237149] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000
[2024-09-13 13:02:24.237159] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0")
[2024-09-13 13:02:24.237170] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR")
[2024-09-13 13:02:24.237920] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.238179] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.238201] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.238208] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.238219] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.238250] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=5][errcode=0] server is initiating(server_id=0, local_seq=31, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:24.239237] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019)
[2024-09-13 13:02:24.239258] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=18][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-09-13 13:02:24.239265] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase)
[2024-09-13 13:02:24.239273] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=7][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-09-13 13:02:24.239285] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=10][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:24.239290] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=5][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:24.239298] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=6][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-09-13 13:02:24.239306] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=7][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:24.239310] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:24.239315] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:24.239319] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:24.239327] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=7][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:24.239331] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=3][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:24.239335] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:24.239349] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=8][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:24.239356] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:24.239362] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=5][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:24.239372] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=9][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:24.239377] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-09-13 13:02:24.239383] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=5][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:24.239389] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=5][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:24.239408] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=16][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:24.239423] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:24.239431] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=7][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:24.239446] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=15][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:24.239472] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=9][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:24.239491] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=18][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.239496] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:24.239508] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=11][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:24.239513] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=5][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:24.239520] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=7][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:24.239526] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=5][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203744239075, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:24.239542] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=15][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:24.239547] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=3][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:24.239607] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=7][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-09-13 13:02:24.239618] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=10][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true)
[2024-09-13 13:02:24.239623] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=5][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:24.239628] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=4][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:24.239633] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=4][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-09-13 13:02:24.239644] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=10][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-09-13 13:02:24.239649] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C82-0-0] [lt=4][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:24.252042] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A80-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744251561) [2024-09-13 13:02:24.252049] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:24.252075] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744252041) [2024-09-13 13:02:24.252072] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A80-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203744251561}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, 
server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:24.252089] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203744051988, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:24.252105] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:24.252118] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:24.252158] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.252170] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.252177] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ 
(ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744252143) [2024-09-13 13:02:24.252192] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.252200] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.252206] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744252188) [2024-09-13 13:02:24.254834] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14020250010, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:24.257692] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=11] table not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, table_name.ptr()="data_size:27, data:5F5F616C6C5F7669727475616C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:24.257715] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=21][errcode=-5019] synonym not 
exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, ret=-5019) [2024-09-13 13:02:24.257724] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_virtual_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:24.257731] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_virtual_ls_meta_table) [2024-09-13 13:02:24.257737] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:24.257741] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:24.257748] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_virtual_ls_meta_table' doesn't exist [2024-09-13 13:02:24.257752] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=3][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:24.257758] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=6][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:24.257762] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) 
[20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:24.257768] WDIAG [SQL.RESV] resolve_joined_table_item (ob_dml_resolver.cpp:3379) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=5][errcode=-5019] resolve table failed(ret=-5019) [2024-09-13 13:02:24.257772] WDIAG [SQL.RESV] resolve_joined_table (ob_dml_resolver.cpp:2934) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] resolve joined table item failed(ret=-5019) [2024-09-13 13:02:24.257777] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2788) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] resolve joined table failed(ret=-5019) [2024-09-13 13:02:24.257780] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:24.257785] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:24.257788] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=3][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:24.257792] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:24.257803] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=6][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:24.257808] WDIAG [SQL] generate_physical_plan 
(ob_sql.cpp:3154) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:24.257814] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:24.257820] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=5][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:24.257824] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=3][errcode=-5019] fail to handle text query(stmt=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;, ret=-5019) [2024-09-13 13:02:24.257834] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=9][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:24.257839] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = 
b.tenant_id and a.ls_id = b.ls_id;"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:24.257859] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=16][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:24.257871] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=9][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:24.257884] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=13][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:24.257888] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:24.257899] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:24.257908] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20295][BlackListServic][T1][YB42AC103323-000621F921260C7F-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 
13:02:24.257914] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20295][BlackListServic][T0][YB42AC103323-000621F921260C7F-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, aret=-5019, ret=-5019) [2024-09-13 13:02:24.257922] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:24.257927] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:24.257934] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:24.257939] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203744257363, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 
1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:24.257949] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:111) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:24.257953] WDIAG [STORAGE.TRANS] do_thread_task_ (ob_black_list.cpp:222) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:24.257965] WDIAG [STORAGE.TRANS] do_thread_task_ (ob_black_list.cpp:238) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] failed too much times, reset blacklist [2024-09-13 13:02:24.257972] INFO [STORAGE.TRANS] print_stat_ (ob_black_list.cpp:398) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=7] start to print blacklist info [2024-09-13 13:02:24.258016] INFO [STORAGE.TRANS] run1 (ob_black_list.cpp:194) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4] ls blacklist refresh finish(cost_time=1480) [2024-09-13 13:02:24.263168] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=13][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203744263072, ctx_timeout_ts=1726203744263072, worker_timeout_ts=1726203744263072, default_timeout=1000000) [2024-09-13 
13:02:24.263193] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:24.263201] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:24.263211] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.263222] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) [2024-09-13 13:02:24.263238] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.263248] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.263274] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.263322] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552140438, cache_obj->added_lc()=false, cache_obj->get_object_id()=252, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.264226] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203744263072, ctx_timeout_ts=1726203744263072, worker_timeout_ts=1726203744263072, default_timeout=1000000) [2024-09-13 13:02:24.264251] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=24][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:24.264258] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:24.264266] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.264272] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, 
is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.264288] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:24.264317] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=0][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:24.264333] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.264340] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.264371] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:24.264390] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=1][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:24.264401] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=5][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE 
table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:24.264414] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.264461] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=7] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000558) [2024-09-13 13:02:24.264473] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:24.264488] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=12][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:24.264497] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:24.264504] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:24.264514] WDIAG [COMMON.MYSQLP] 
read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1) [2024-09-13 13:02:24.264529] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:24.264571] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C7F-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552141688, cache_obj->added_lc()=false, cache_obj->get_object_id()=253, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.264636] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=11][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:24.264644] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:24.264651] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:24.264659] WDIAG 
[SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=7][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:24.264669] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4601) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=9][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:24.264678] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=8][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1) [2024-09-13 13:02:24.264684] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=6] [REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, cost=2001616) [2024-09-13 13:02:24.264691] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1) [2024-09-13 13:02:24.264698] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=5] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2001634) [2024-09-13 13:02:24.264729] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=31][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", 
tenant_ids=[1]) [2024-09-13 13:02:24.264736] INFO [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=6] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:24.264742] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C7F-0-0] [lt=6][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:24.264749] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4012] fail to batch process task(ret=-4012) [2024-09-13 13:02:24.264758] WDIAG [SERVER] run1 (ob_uniq_task_queue.h:456) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1) [2024-09-13 13:02:24.264783] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=8] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1]) [2024-09-13 13:02:24.264808] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=21] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1) [2024-09-13 13:02:24.267082] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.267493] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.267515] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.267521] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.267529] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.267541] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.267547] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:24.267554] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:24.267558] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638) [2024-09-13 13:02:24.267682] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.267889] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.267906] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.267911] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.267917] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.267926] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744267925, replica_locations:[]}) [2024-09-13 13:02:24.267941] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:24.267953] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0][errcode=-4638] fail to refresh core partition(tmp_ret=-4721) [2024-09-13 13:02:24.268075] WDIAG [SHARE.LOCATION] nonblock_get 
(ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:24.268170] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000) [2024-09-13 13:02:24.268187] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=15][errcode=-4638] [2024-09-13 13:02:24.268216] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.268273] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.268457] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.268463] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.268476] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.268479] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.268486] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.268489] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.268497] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.268500] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.268512] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744268511, replica_locations:[]}) [2024-09-13 13:02:24.268507] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.268553] INFO [SHARE] 
renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=46] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:24.268562] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:24.268567] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0) [2024-09-13 13:02:24.268570] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1996252, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.268635] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.268654] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.268858] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.268887] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) 
[2024-09-13 13:02:24.268896] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.268902] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.268895] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.268912] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744268912, replica_locations:[]}) [2024-09-13 13:02:24.268921] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.268923] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.268932] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.268938] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.268944] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.268946] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.268954] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.268970] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.268967] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:24.268975] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:24.268982] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] 
[lt=6][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1) [2024-09-13 13:02:24.269036] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552146121, cache_obj->added_lc()=false, cache_obj->get_object_id()=254, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.269066] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.269199] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.269214] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.269226] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.269239] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:24.269258] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=18][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.269271] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:24.269282] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:24.269292] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2) [2024-09-13 13:02:24.269301] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638) [2024-09-13 13:02:24.269313] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:24.269320] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2) [2024-09-13 13:02:24.269768] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.269947] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.269962] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.269968] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.269975] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.269984] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744269983, replica_locations:[]}) [2024-09-13 13:02:24.270022] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=1000, remain_us=1994800, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.270352] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=28] replace map num details(ret=0, 
replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2) [2024-09-13 13:02:24.271230] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.271415] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.271430] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.271443] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.271449] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.271457] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744271456, replica_locations:[]}) [2024-09-13 13:02:24.271467] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.271482] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.271488] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.271501] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.271529] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552148650, cache_obj->added_lc()=false, cache_obj->get_object_id()=255, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.272246] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.272473] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:24.272492] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.272498] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.272506] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.272515] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744272514, replica_locations:[]}) [2024-09-13 13:02:24.272553] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1992270, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.274785] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.274967] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.274984] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.274990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.274997] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.275008] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744275007, replica_locations:[]}) [2024-09-13 13:02:24.275025] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.275048] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) 
[2024-09-13 13:02:24.275059] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.275081] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.275112] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552152233, cache_obj->added_lc()=false, cache_obj->get_object_id()=256, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.275880] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.276105] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.276129] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.276138] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.276147] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.276159] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744276158, replica_locations:[]}) [2024-09-13 13:02:24.276208] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=3000, remain_us=1988615, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.279432] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.279642] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.279664] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 
13:02:24.279670] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.279677] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.279686] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744279685, replica_locations:[]}) [2024-09-13 13:02:24.279700] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.279723] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.279733] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.279748] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") 
[2024-09-13 13:02:24.279775] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552156894, cache_obj->added_lc()=false, cache_obj->get_object_id()=257, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.280584] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.280841] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.280861] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.280867] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.280882] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.280890] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744280890, replica_locations:[]}) [2024-09-13 13:02:24.280931] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1983892, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.285128] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.285376] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.285394] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.285400] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.285406] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] server_list is empty, 
do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.285415] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744285414, replica_locations:[]}) [2024-09-13 13:02:24.285427] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.285454] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.285462] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.285503] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.285542] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552162661, cache_obj->added_lc()=false, cache_obj->get_object_id()=258, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 
0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.286354] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.286576] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.286598] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.286605] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.286612] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.286621] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744286620, replica_locations:[]}) [2024-09-13 13:02:24.286659] INFO [SERVER] 
sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1978163, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.291919] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.292154] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.292175] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.292181] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.292189] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.292200] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744292200, replica_locations:[]}) [2024-09-13 13:02:24.292214] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.292234] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.292243] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.292259] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.292295] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552169415, cache_obj->added_lc()=false, cache_obj->get_object_id()=259, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.293105] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.293422] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.293451] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.293457] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.293464] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.293476] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744293476, replica_locations:[]}) [2024-09-13 13:02:24.293514] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1971308, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.299712] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4719] get ls 
handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.300020] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.300036] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.300042] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.300048] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.300057] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744300056, replica_locations:[]}) [2024-09-13 13:02:24.300069] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) 
[2024-09-13 13:02:24.300088] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.300093] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.300123] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.300161] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552177280, cache_obj->added_lc()=false, cache_obj->get_object_id()=260, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.301043] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.301366] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.301389] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.301395] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.301404] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.301413] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744301412, replica_locations:[]}) [2024-09-13 13:02:24.301463] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1963359, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.305140] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=17] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:24.308698] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.308991] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.309012] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.309019] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.309027] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.309039] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744309038, replica_locations:[]}) [2024-09-13 13:02:24.309057] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.309081] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) 
[2024-09-13 13:02:24.309090] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.309112] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.309156] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552186273, cache_obj->added_lc()=false, cache_obj->get_object_id()=261, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.310495] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.310699] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.310729] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.310738] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.310747] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.310759] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744310758, replica_locations:[]}) [2024-09-13 13:02:24.310820] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1954002, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.311839] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=16][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:24.319106] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.319329] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.319357] 
WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.319371] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.319382] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.319399] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744319398, replica_locations:[]}) [2024-09-13 13:02:24.319417] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.319470] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.319485] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=0] fail close main 
query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.319533] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.319594] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552196707, cache_obj->added_lc()=false, cache_obj->get_object_id()=262, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.320768] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.320924] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.320943] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.320949] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 
13:02:24.320959] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.320971] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744320970, replica_locations:[]}) [2024-09-13 13:02:24.321018] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1943805, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.327596] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=28][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:24.330252] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.330461] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.330498] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=36][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.330504] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.330520] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.330534] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744330533, replica_locations:[]}) [2024-09-13 13:02:24.330548] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.330570] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.330579] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.330598] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.330642] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552207759, cache_obj->added_lc()=false, cache_obj->get_object_id()=263, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.331176] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:24.331229] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:24.331244] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=14] refresh gts(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1, need_refresh=false, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:24.331252] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:24.331254] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60C9D-0-0] [lt=14][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, 
task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203744331201}) [2024-09-13 13:02:24.331632] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.331912] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.331932] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.331939] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.331946] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.331954] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744331953, replica_locations:[]}) [2024-09-13 13:02:24.331997] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=10000, remain_us=1932825, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.342228] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.342515] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.342536] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.342545] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.342562] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.342579] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203744342578, replica_locations:[]}) [2024-09-13 13:02:24.342736] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=155] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.342763] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.342774] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.342809] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.342870] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552219983, cache_obj->added_lc()=false, cache_obj->get_object_id()=264, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.343899] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.344079] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.344101] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.344111] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.344125] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.344140] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744344139, replica_locations:[]}) [2024-09-13 13:02:24.344221] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1920601, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.348452] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=21] skip inspect bad 
block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:24.352260] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:24.352287] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744352254) [2024-09-13 13:02:24.352299] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203744252102, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:24.352325] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.352340] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.352347] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744352309) [2024-09-13 13:02:24.355493] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.355738] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.355756] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.355765] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.355784] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.355799] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744355798, replica_locations:[]}) [2024-09-13 
13:02:24.355818] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.355847] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.355888] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.355933] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.356007] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552233120, cache_obj->added_lc()=false, cache_obj->get_object_id()=265, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.356863] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3F-0-0] [lt=16] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:24.356895] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B3F-0-0] [lt=31][errcode=-4038] handle request 
failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203744356394], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:24.357164] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.357367] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCF-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:24.357373] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.357389] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.357398] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.357408] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.357419] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744357418, replica_locations:[]}) [2024-09-13 13:02:24.357493] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=12000, remain_us=1907330, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.357853] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DCF-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:24.369836] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.370100] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.370121] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.370131] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.370146] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:24.370165] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744370164, replica_locations:[]}) [2024-09-13 13:02:24.370186] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.370216] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.370228] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.370266] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.370323] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552247436, cache_obj->added_lc()=false, cache_obj->get_object_id()=266, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 
0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.371389] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.371910] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.371936] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.371944] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.371954] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.371967] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744371965, replica_locations:[]}) [2024-09-13 13:02:24.372059] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=13000, remain_us=1892764, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.385173] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=24] ====== tenant freeze timer task ====== [2024-09-13 13:02:24.385219] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=32][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:24.385506] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.385943] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.385965] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.385972] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.385981] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.385997] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744385996, replica_locations:[]}) [2024-09-13 13:02:24.386012] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.386037] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.386046] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.386066] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.386112] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552263229, cache_obj->added_lc()=false, cache_obj->get_object_id()=267, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.387201] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.387482] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.387503] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.387512] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.387525] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.387536] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203744387536, replica_locations:[]}) [2024-09-13 13:02:24.387602] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1877221, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.401853] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.402161] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.402180] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.402186] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.402194] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.402206] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744402205, replica_locations:[]}) [2024-09-13 13:02:24.402217] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.402241] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.402251] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.402289] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.402335] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552279451, cache_obj->added_lc()=false, cache_obj->get_object_id()=268, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.403380] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=39][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.403825] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.403847] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.403853] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.403861] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.403883] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744403883, replica_locations:[]}) [2024-09-13 13:02:24.403937] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1860886, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.419154] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.419649] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.419668] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.419674] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.419683] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.419699] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744419698, replica_locations:[]}) [2024-09-13 13:02:24.419713] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, 
ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.419737] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.419746] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.419766] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.419811] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552296928, cache_obj->added_lc()=false, cache_obj->get_object_id()=269, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.420802] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.421190] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.421208] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.421214] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.421222] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.421234] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744421233, replica_locations:[]}) [2024-09-13 13:02:24.421301] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1843522, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.425698] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92169005B-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.437536] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.438034] 
WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.438057] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.438064] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.438077] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.438093] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744438092, replica_locations:[]}) [2024-09-13 13:02:24.438109] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.438134] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.438194] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.438223] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.438273] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552315389, cache_obj->added_lc()=false, cache_obj->get_object_id()=270, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.439267] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.439894] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.439916] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.439922] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.439933] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.439946] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744439945, replica_locations:[]}) [2024-09-13 13:02:24.439998] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1824824, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.452170] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A81-0-0] [lt=19][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744451700) [2024-09-13 13:02:24.452202] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A81-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC 
fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203744451700}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:24.452236] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.452257] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.452270] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744452218) [2024-09-13 13:02:24.455160] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:24.457205] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.457702] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.457724] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.457733] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.457746] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.457765] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744457764, replica_locations:[]}) [2024-09-13 13:02:24.457780] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], 
ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.457802] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.457807] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.457838] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.457901] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552335015, cache_obj->added_lc()=false, cache_obj->get_object_id()=271, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.458968] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.459333] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.459351] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.459357] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.459366] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.459375] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744459374, replica_locations:[]}) [2024-09-13 13:02:24.459424] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1805399, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.462835] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.463320] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.464068] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] 
[lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.464911] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.465197] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.470464] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=12] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1) [2024-09-13 13:02:24.477666] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.477970] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.477998] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.478004] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.478017] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.478029] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744478028, replica_locations:[]}) [2024-09-13 13:02:24.478044] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.478070] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.478079] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.478100] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.478166] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552355262, cache_obj->added_lc()=false, cache_obj->get_object_id()=272, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.479217] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.479480] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.479498] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.479504] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.479513] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.479525] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203744479524, replica_locations:[]})
[2024-09-13 13:02:24.479578] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1785245, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:24.498828] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.499194] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.499218] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.499227] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.499240] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.499259] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744499258, replica_locations:[]})
[2024-09-13 13:02:24.499280] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:24.499311] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:24.499320] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:24.499361] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.499450] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552376550, cache_obj->added_lc()=false, cache_obj->get_object_id()=273, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:24.500510] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.500815] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.500839] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.500849] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.500861] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.500884] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744500873, replica_locations:[]})
[2024-09-13 13:02:24.500946] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1763876, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:24.521210] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.521580] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.521605] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.521615] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.521626] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.521641] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744521640, replica_locations:[]})
[2024-09-13 13:02:24.521664] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:24.521697] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:24.521709] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:24.521734] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.521792] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552398905, cache_obj->added_lc()=false, cache_obj->get_object_id()=274, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:24.522915] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.523192] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.523213] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.523223] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.523234] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.523246] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744523245, replica_locations:[]})
[2024-09-13 13:02:24.523312] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1741510, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:24.544600] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.544974] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.544997] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.545007] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.545018] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.545034] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744545033, replica_locations:[]})
[2024-09-13 13:02:24.545049] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:24.545076] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:24.545085] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:24.545110] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.545160] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552422275, cache_obj->added_lc()=false, cache_obj->get_object_id()=275, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:24.546255] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.546677] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.546698] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.546705] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.546716] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.546726] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744546725, replica_locations:[]})
[2024-09-13 13:02:24.546782] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1718041, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:24.552217] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A82-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744551782)
[2024-09-13 13:02:24.552244] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A82-0-0] [lt=22][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203744551782}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:24.552276] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1)
[2024-09-13 13:02:24.552299] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:24.552353] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744552267)
[2024-09-13 13:02:24.552368] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203744352308, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:24.552396] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:24.552425] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:24.552452] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744552381)
[2024-09-13 13:02:24.569019] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.569369] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.569393] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.569400] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.569411] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.569427] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744569425, replica_locations:[]})
[2024-09-13 13:02:24.569452] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:24.569478] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:24.569484] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:24.569513] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.569563] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552446679, cache_obj->added_lc()=false, cache_obj->get_object_id()=276, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:24.570681] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.571015] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.571037] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.571043] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.571052] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.571064] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744571063, replica_locations:[]})
[2024-09-13 13:02:24.571116] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1693706, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:24.594337] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.594691] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.594714] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.594721] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.594730] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.594743] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744594742, replica_locations:[]})
[2024-09-13 13:02:24.594758] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:24.594784] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:24.594793] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:24.594822] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.594871] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552471987, cache_obj->added_lc()=false, cache_obj->get_object_id()=277, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:24.595960] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.596247] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.596269] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.596276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.596286] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.596298] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744596298, replica_locations:[]})
[2024-09-13 13:02:24.596350] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=24000, remain_us=1668472, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:24.618887] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=42] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:24.620579] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.620954] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.620976] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.620982] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.620993] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.621008] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744621007, replica_locations:[]})
[2024-09-13 13:02:24.621023] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:24.621046] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:24.621055] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:24.621078] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:24.621125] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552498242, cache_obj->added_lc()=false, cache_obj->get_object_id()=278, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:24.622112] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.622381] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.622400] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.622407] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.622414] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:24.622423] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744622422, replica_locations:[]})
[2024-09-13 13:02:24.622495] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=25000, remain_us=1642327, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:24.647721] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:24.648073] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.648098] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:24.648105] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:24.648116] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151)
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.648130] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744648130, replica_locations:[]}) [2024-09-13 13:02:24.648146] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.648169] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.648179] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.648210] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.648257] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552525375, cache_obj->added_lc()=false, cache_obj->get_object_id()=279, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.649309] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.649550] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.649570] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.649577] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.649587] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.649598] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203744649598, replica_locations:[]}) [2024-09-13 13:02:24.649650] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1615172, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.652347] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.652368] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.652376] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744652332) [2024-09-13 13:02:24.652375] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A83-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744651848) [2024-09-13 13:02:24.652402] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A83-0-0] [lt=17][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, 
req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203744651848}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:24.652416] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:24.652432] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744652410) [2024-09-13 13:02:24.652458] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203744552379, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:24.652478] WDIAG [STORAGE.TRANS] generate_min_weak_read_version 
(ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.652485] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.652489] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744652473) [2024-09-13 13:02:24.655918] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=29] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:24.670568] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=28] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0) [2024-09-13 13:02:24.676774] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.677007] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.677054] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=46][errcode=-4018] 
fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.677062] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.677072] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.677085] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744677084, replica_locations:[]}) [2024-09-13 13:02:24.677102] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.677128] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.677137] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.677158] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.677229] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552554344, cache_obj->added_lc()=false, cache_obj->get_object_id()=280, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.678757] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.679722] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.679768] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=43][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.679781] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.679794] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.679813] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744679811, replica_locations:[]}) [2024-09-13 13:02:24.679978] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=27000, remain_us=1584845, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.707276] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.707513] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.707540] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.707547] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 
13:02:24.707560] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.707573] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744707572, replica_locations:[]}) [2024-09-13 13:02:24.707589] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.707616] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.707626] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.707658] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.707710] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552584826, cache_obj->added_lc()=false, 
cache_obj->get_object_id()=281, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.708821] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=42][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.709077] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.709094] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.709100] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.709108] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.709120] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744709119, replica_locations:[]}) [2024-09-13 13:02:24.709185] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1555638, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.718367] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=21][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:24.723201] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=23][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:1118, tid:19944}]) [2024-09-13 13:02:24.723425] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20292][T1_L0_G0][T1][YB42AC103326-00062119EC0A1188-0-0] [lt=12][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:24.723462] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20292][T1_L0_G0][T1][YB42AC103326-00062119EC0A1188-0-0] [lt=36][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=1056294) [2024-09-13 13:02:24.723474] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20292][T1_L0_G0][T1][YB42AC103326-00062119EC0A1188-0-0] [lt=11][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1) [2024-09-13 13:02:24.723483] WDIAG [SQL.EXE] before_process 
(ob_remote_executor_processor.cpp:1126) [20292][T1_L0_G0][T1][YB42AC103326-00062119EC0A1188-0-0] [lt=8][errcode=-4012] base before process failed(ret=-4012) [2024-09-13 13:02:24.723491] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20292][T1_L0_G0][T1][YB42AC103326-00062119EC0A1188-0-0] [lt=7][errcode=-4012] before process fail(ret=-4012) [2024-09-13 13:02:24.723606] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20292][T1_L0_G0][T1][YB42AC103326-00062119D7A51A92-0-0] [lt=5][errcode=0] server is initiating(server_id=0, local_seq=32, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:24.724391] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119D7A51A92-0-0] [lt=10][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:24.726312] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=12] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:24.726350] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=18] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:24.737427] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.742022] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.742048] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.742055] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.742067] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.742085] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744742083, replica_locations:[]}) [2024-09-13 13:02:24.742101] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.742121] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:28, local_retry_times:28, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 
13:02:24.742140] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.742160] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.742169] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.742177] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.742183] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:24.742199] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:24.742210] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.742261] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552619377, cache_obj->added_lc()=false, cache_obj->get_object_id()=282, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 
0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.743174] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.743199] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.743348] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.743587] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.743599] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.743608] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, 
replicas:[]}) [2024-09-13 13:02:24.743618] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.743629] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744743628, replica_locations:[]}) [2024-09-13 13:02:24.743642] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.743651] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.743660] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.743672] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:24.743679] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:24.743687] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:24.743700] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:24.743711] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.743718] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.743727] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:24.743734] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 
13:02:24.743740] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:24.743749] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:24.743757] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:24.743762] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:24.743769] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:24.743775] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:24.743783] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:24.743790] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:24.743803] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:24.743811] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:24.743819] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:24.743827] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:24.743835] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:24.743842] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=29, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:24.743860] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] will sleep(sleep_us=29000, remain_us=1520963, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.752476] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.752493] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.752500] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744752461) [2024-09-13 13:02:24.773128] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.773429] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.773458] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.773465] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.773476] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.773490] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744773489, replica_locations:[]}) [2024-09-13 13:02:24.773505] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.773522] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:29, local_retry_times:29, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:24.773540] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.773549] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.773560] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.773567] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.773574] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:24.773598] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:24.773609] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.773652] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552650769, cache_obj->added_lc()=false, cache_obj->get_object_id()=283, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.774541] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 
13:02:24.774565] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.774677] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.774984] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.774997] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.775003] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.775010] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.775022] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203744775021, replica_locations:[]}) [2024-09-13 13:02:24.775035] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.775044] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.775053] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.775064] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:24.775070] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:24.775077] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, 
candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:24.775091] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:24.775101] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.775109] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.775118] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:24.775125] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:24.775131] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:24.775141] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, 
column_name) [2024-09-13 13:02:24.775150] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:24.775157] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:24.775164] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:24.775171] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:24.775178] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:24.775183] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:24.775194] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:24.775202] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:24.775210] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:24.775217] WDIAG [SQL] stmt_query (ob_sql.cpp:229) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:24.775225] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:24.775232] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=30, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:24.775250] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] will sleep(sleep_us=30000, remain_us=1489573, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.805647] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.805917] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.805939] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.805952] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.805967] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.805987] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744805986, replica_locations:[]}) [2024-09-13 13:02:24.806002] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.806019] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:30, local_retry_times:30, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:24.806037] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.806047] 
WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.806058] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.806065] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.806072] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:24.806087] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:24.806098] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.806147] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552683263, cache_obj->added_lc()=false, cache_obj->get_object_id()=284, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 
0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.807188] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.807215] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.807351] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.807575] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.807592] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.807598] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.807609] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.807622] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744807621, replica_locations:[]}) [2024-09-13 13:02:24.807635] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.807645] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.807654] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.807666] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:24.807674] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:24.807682] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:24.807696] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:24.807709] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.807717] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.807726] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:24.807730] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:24.807738] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:24.807748] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:24.807757] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:24.807764] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:24.807771] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:24.807778] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:24.807785] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:24.807792] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:24.807806] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:24.807815] WDIAG 
[SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:24.807823] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:24.807830] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:24.807838] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:24.807846] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=31, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:24.807865] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] will sleep(sleep_us=31000, remain_us=1456958, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.831665] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, 
ls_id={id:1}) [2024-09-13 13:02:24.831698] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=32][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:24.831733] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:24.831744] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:24.831765] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:24.839188] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.839446] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.839469] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.839480] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.839492] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.839507] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744839506, replica_locations:[]}) [2024-09-13 13:02:24.839524] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.839543] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:31, local_retry_times:31, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:24.839572] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.839579] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=0] fail 
close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.839590] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.839597] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.839603] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:24.839619] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:24.839630] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.839683] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552716794, cache_obj->added_lc()=false, cache_obj->get_object_id()=285, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.840759] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.840788] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.840964] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.841205] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.841227] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.841236] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.841247] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.841262] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744841261, replica_locations:[]}) [2024-09-13 13:02:24.841276] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.841286] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.841295] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.841307] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:24.841315] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) 
[2024-09-13 13:02:24.841323] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:24.841337] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:24.841348] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.841356] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.841364] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:24.841372] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:24.841379] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:24.841389] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:24.841399] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:24.841406] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:24.841413] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:24.841420] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:24.841428] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:24.841444] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:24.841460] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:24.841476] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4721] Failed to generate 
plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:24.841484] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:24.841492] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:24.841501] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:24.841510] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=32, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:24.841531] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] will sleep(sleep_us=32000, remain_us=1423292, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.852471] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A84-0-0] [lt=40][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, 
generate_timestamp=1726203744851990) [2024-09-13 13:02:24.852503] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A84-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203744851990}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:24.852530] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:24.852555] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744852523) [2024-09-13 13:02:24.852570] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read 
service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203744652471, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:24.852599] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.852614] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.852622] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744852584) [2024-09-13 13:02:24.856246] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:24.857489] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B40-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:24.857510] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B40-0-0] [lt=20][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203744856934], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:24.858034] WDIAG [RPC.FRAME] check_cluster_id 
(ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD0-0-0] [lt=15][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203744857655, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035290, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203744857007}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:24.858084] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD0-0-0] [lt=49][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:24.858763] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD0-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:24.872811] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=8] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:24.873861] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.873887] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=23] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:24.873975] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn 
count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:24.874162] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.874189] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.874203] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.874219] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.874240] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744874238, replica_locations:[]}) [2024-09-13 13:02:24.874263] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) 
[2024-09-13 13:02:24.874289] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:32, local_retry_times:32, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:24.874313] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.874321] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.874337] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.874347] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.874357] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:24.874394] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:24.874410] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=0] the key is not valid which at plan cache 
mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.874486] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552751600, cache_obj->added_lc()=false, cache_obj->get_object_id()=286, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.874756] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=299] Cache replace map node details(ret=0, replace_node_count=0, replace_time=3812, replace_start_pos=251656, replace_num=62914) [2024-09-13 13:02:24.874774] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10) [2024-09-13 13:02:24.875677] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.875730] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=51][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.875906] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=34][errcode=-4719] get ls handle 
failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.876101] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.876120] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.876130] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.876144] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.876160] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744876159, replica_locations:[]}) [2024-09-13 13:02:24.876179] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) 
[2024-09-13 13:02:24.876194] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.876207] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.876238] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:24.876249] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:24.876261] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:24.876281] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) 
[2024-09-13 13:02:24.876296] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.876307] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.876319] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:24.876326] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:24.876336] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:24.876348] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:24.876360] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:24.876371] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=10][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:24.876381] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:24.876391] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:24.876401] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:24.876411] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:24.876428] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:24.876463] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=32][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:24.876475] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:24.876487] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:24.876499] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:24.876509] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=33, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:24.876534] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15] will sleep(sleep_us=33000, remain_us=1388289, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.909809] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.910102] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.910127] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.910137] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.910148] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.910164] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744910163, replica_locations:[]}) [2024-09-13 13:02:24.910180] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.910199] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:33, local_retry_times:33, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:24.910218] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.910227] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.910238] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 
13:02:24.910243] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.910246] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:24.910261] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:24.910272] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.910319] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552787435, cache_obj->added_lc()=false, cache_obj->get_object_id()=287, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.911344] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, 
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.911369] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.911502] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.912018] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.912034] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.912039] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.912049] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.912061] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744912060, replica_locations:[]}) [2024-09-13 13:02:24.912075] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.912086] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:24.912095] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:24.912106] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:24.912114] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:24.912122] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] Get partition error, the location cache 
will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:24.912135] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:24.912145] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.912153] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:24.912161] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:24.912168] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:24.912175] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:24.912184] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE 
table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:24.912193] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:24.912200] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:24.912206] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:24.912213] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:24.912218] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:24.912225] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:24.912239] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:24.912248] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:24.912254] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 
13:02:24.912258] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:24.912266] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:24.912273] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=34, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:24.912293] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] will sleep(sleep_us=34000, remain_us=1352530, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.946590] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.947025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.947050] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.947064] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.947109] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=43] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.947126] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744947125, replica_locations:[]}) [2024-09-13 13:02:24.947142] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.947164] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:34, local_retry_times:34, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:24.947181] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4006] 
exec result is null(ret=-4006) [2024-09-13 13:02:24.947190] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.947201] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.947209] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:24.947215] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:24.947237] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.947285] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552824401, cache_obj->added_lc()=false, cache_obj->get_object_id()=288, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.948358] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.948553] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.948570] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.948580] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.948590] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.948602] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744948601, replica_locations:[]}) [2024-09-13 13:02:24.948654] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=35000, remain_us=1316168, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:24.952500] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) 
[20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A85-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203744952059) [2024-09-13 13:02:24.952527] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A85-0-0] [lt=22][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203744952059}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:24.952557] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.952574] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:24.952583] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, 
ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203744952543) [2024-09-13 13:02:24.983950] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.984202] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.984232] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.984246] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.984262] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.984278] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744984277, replica_locations:[]}) [2024-09-13 13:02:24.984295] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:24.984320] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:24.984329] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:24.984352] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:24.984400] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552861516, cache_obj->added_lc()=false, cache_obj->get_object_id()=289, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:24.985543] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:24.986129] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.986150] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:24.986164] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:24.986178] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:24.986195] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203744986194, replica_locations:[]}) [2024-09-13 13:02:24.986262] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1278561, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.022582] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.022959] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.022991] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.023005] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.023022] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.023043] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745023041, replica_locations:[]}) [2024-09-13 13:02:25.023065] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.023099] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.023111] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.023153] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.023216] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552900329, cache_obj->added_lc()=false, cache_obj->get_object_id()=290, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.024637] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.024909] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.024935] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.024949] 
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.024964] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.024980] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745024979, replica_locations:[]}) [2024-09-13 13:02:25.025088] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=37000, remain_us=1239735, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.052619] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:25.052672] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=35][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, 
local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745052609) [2024-09-13 13:02:25.052687] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203744852582, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:25.052710] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.052719] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.052728] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745052698) [2024-09-13 13:02:25.052776] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A86-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745052182) [2024-09-13 13:02:25.052826] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.052836] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.052812] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A86-0-0] [lt=30][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203745052182}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:25.052840] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745052821) [2024-09-13 13:02:25.056608] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=29] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 
13:02:25.062409] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.062727] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.062756] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.062771] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.062787] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.062808] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745062807, replica_locations:[]}) [2024-09-13 13:02:25.062831] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20] [TABLET_LOCATION] batch 
renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.062859] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.062872] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.062915] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.062976] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552940089, cache_obj->added_lc()=false, cache_obj->get_object_id()=291, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.064407] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=55][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.064653] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.064679] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.064693] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.064709] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.064726] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745064725, replica_locations:[]}) [2024-09-13 13:02:25.064799] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=38000, remain_us=1200024, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.074865] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=9] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9) [2024-09-13 13:02:25.092904] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=9] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request 
doing=0/0) [2024-09-13 13:02:25.092929] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=7] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.092891] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=22] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.094080] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=45] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.094658] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=7] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.094714] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.095255] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=30] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.095329] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=13] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.095894] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.103131] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4719] get ls handle 
failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.103461] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.103493] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.103506] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.103523] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.103545] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745103544, replica_locations:[]}) [2024-09-13 13:02:25.103567] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) 
[2024-09-13 13:02:25.103602] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.103615] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.103657] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.103720] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6552980832, cache_obj->added_lc()=false, cache_obj->get_object_id()=292, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.105190] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.105569] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.105613] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=42][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.105627] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.105643] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.105661] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745105660, replica_locations:[]}) [2024-09-13 13:02:25.105735] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1159088, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.118670] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=30] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:25.132570] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC7A-0-0] [lt=24][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.138634] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21A3-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.139426] WDIAG 
[RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21A7-0-0] [lt=26][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.139933] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21A8-0-0] [lt=40][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.140681] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=28] PNIO [ratelimit] time: 1726203745140680, bytes: 3034666, bw: 0.117554 MB/s, add_ts: 1001731, add_bytes: 123478 [2024-09-13 13:02:25.140705] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21AC-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.141080] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21AD-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.141686] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21B1-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.142093] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21B2-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.142817] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21B6-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.143153] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21B7-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.143643] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21BB-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 
13:02:25.145001] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.145246] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.145277] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.145292] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.145309] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.145331] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745145329, replica_locations:[]}) [2024-09-13 13:02:25.145352] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19] [TABLET_LOCATION] batch 
renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.145386] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.145399] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.145428] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.145507] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=29][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553022619, cache_obj->added_lc()=false, cache_obj->get_object_id()=293, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.147015] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.147395] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.147421] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.147451] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.147466] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.147484] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745147483, replica_locations:[]}) [2024-09-13 13:02:25.147559] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=40000, remain_us=1117263, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.152903] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:25.152933] WDIAG 
[STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=27][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745152893) [2024-09-13 13:02:25.152949] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203745052698, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:25.152972] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.152982] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.152993] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745152957) [2024-09-13 13:02:25.182059] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=22] PNIO [ratelimit] time: 1726203745182055, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007610, add_bytes: 0 [2024-09-13 13:02:25.187812] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.188202] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.188233] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.188248] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.188260] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.188277] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745188276, replica_locations:[]}) [2024-09-13 13:02:25.188292] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], 
ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.188318] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.188327] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.188357] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.188406] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553065522, cache_obj->added_lc()=false, cache_obj->get_object_id()=294, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.189588] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.189856] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.189881] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.189888] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.189899] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.189911] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745189910, replica_locations:[]}) [2024-09-13 13:02:25.189966] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=41000, remain_us=1074857, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.195369] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782DF-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.210508] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=36] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:25.226408] INFO [SERVER] prepare_alloc_queue 
(ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=16] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:25.226472] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=25] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:25.228840] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=16] gc stale ls task succ [2024-09-13 13:02:25.231230] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.231622] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.231650] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.231663] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.231714] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=48] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", 
server_list=[]) [2024-09-13 13:02:25.231733] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745231732, replica_locations:[]}) [2024-09-13 13:02:25.231755] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.231785] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.231797] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.231826] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.231893] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553108998, cache_obj->added_lc()=false, cache_obj->get_object_id()=295, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 
0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.233259] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.233317] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=18] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:25.233554] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.233581] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.233594] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.233609] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.233624] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745233623, replica_locations:[]}) [2024-09-13 13:02:25.233701] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=42000, remain_us=1031122, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.237325] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:25.237346] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:25.237356] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:25.237366] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:25.239925] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C83-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.240235] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.240266] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=30][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.240276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.240287] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.240320] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=8][errcode=0] server is initiating(server_id=0, local_seq=33, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:25.241362] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:25.241383] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=19][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:25.241394] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=10][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:25.241404] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:25.241413] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:25.241421] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:25.241430] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=6][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:25.241452] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=21][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:25.241459] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:25.241464] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:25.241471] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=6][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:25.241478] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] fail to exec 
resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:25.241485] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=6][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:25.241492] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:25.241505] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=8][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:25.241513] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:25.241522] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:25.241527] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:25.241535] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:25.241546] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=10][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:25.241553] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=6][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:25.241568] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=11][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:25.241582] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:25.241589] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:25.241596] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:25.241610] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:25.241619] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.241626] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:25.241634] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:25.241642] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:25.241653] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=10][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:25.241661] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=7][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203745241189, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:25.241667] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=6][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:25.241672] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=3][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:25.241725] WDIAG [SHARE.PT] 
get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=5][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:25.241734] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=8][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:25.241743] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=8][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:25.241750] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=6][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:25.241758] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=6][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:25.241767] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=8][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:25.241774] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C83-0-0] [lt=6][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:25.252907] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A87-0-0] [lt=29][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", 
tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745252323) [2024-09-13 13:02:25.252941] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.252955] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.252941] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A87-0-0] [lt=32][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203745252323}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:25.252962] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745252924) [2024-09-13 13:02:25.252976] WDIAG [STORAGE.TRANS] 
process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:25.252993] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745252972) [2024-09-13 13:02:25.253004] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203745152956, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:25.253015] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:25.253025] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:25.253040] WDIAG [STORAGE.TRANS] 
generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.253047] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.253051] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745253037) [2024-09-13 13:02:25.257061] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=30] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:25.274981] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=42] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8) [2024-09-13 13:02:25.276031] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.276374] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.276398] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.276408] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.276419] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.276456] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745276455, replica_locations:[]}) [2024-09-13 13:02:25.276474] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.276494] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:25.276513] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 
13:02:25.276522] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.276556] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.276611] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553153724, cache_obj->added_lc()=false, cache_obj->get_object_id()=296, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.277789] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.278081] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.278102] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.278108] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader 
doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.278119] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.278131] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745278131, replica_locations:[]}) [2024-09-13 13:02:25.278187] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=43000, remain_us=986635, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.321473] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.321843] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.321873] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.321896] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.321914] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.321967] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745321965, replica_locations:[]}) [2024-09-13 13:02:25.321990] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.322022] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.322034] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.322060] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 
13:02:25.322122] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553199235, cache_obj->added_lc()=false, cache_obj->get_object_id()=297, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.323269] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.323558] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.323578] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.323587] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.323598] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.323611] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745323610, replica_locations:[]}) [2024-09-13 13:02:25.323668] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=44000, remain_us=941154, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.332329] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=15][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:25.332405] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:25.332425] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:25.332460] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CA2-0-0] [lt=30][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203745332373}) [2024-09-13 13:02:25.332741] INFO pn_ratelimit (group.c:643) [20054][IngressService][T0][Y0-0000000000000000-0-0] [lt=14] PNIO set ratelimit as 9223372036854775807 bytes/s, grp_id=2 [2024-09-13 
13:02:25.348549] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=20] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:25.353013] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.353033] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.353040] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745352995) [2024-09-13 13:02:25.357902] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B41-0-0] [lt=16] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:25.357924] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B41-0-0] [lt=20][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203745357432], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:25.358398] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD1-0-0] [lt=20][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, 
trace_id_:12380982489894, timeout_:2000000, timestamp:1726203745358014, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035329, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203745357518}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:25.358445] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD1-0-0] [lt=46][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.358899] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD1-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.367900] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.368355] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.368377] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.368383] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.368395] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.368408] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745368407, replica_locations:[]}) [2024-09-13 13:02:25.368425] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.368473] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.368484] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.368514] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.368570] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553245686, cache_obj->added_lc()=false, cache_obj->get_object_id()=298, cache_obj->get_tenant_id()=1, 
lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.369594] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.369989] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.370013] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.370024] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.370034] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.370048] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203745370047, replica_locations:[]}) [2024-09-13 13:02:25.370105] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=45000, remain_us=894718, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.415371] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.415917] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.415941] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.415955] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.415972] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.415993] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745415991, replica_locations:[]}) [2024-09-13 13:02:25.416015] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.416046] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.416066] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.416107] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.416168] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553293281, cache_obj->added_lc()=false, cache_obj->get_object_id()=299, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.417683] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.418108] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.418129] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.418142] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.418157] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.418175] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745418173, replica_locations:[]}) [2024-09-13 13:02:25.418244] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=46000, remain_us=846578, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 
13:02:25.428494] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92169005C-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.452950] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A88-0-0] [lt=31][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745452457) [2024-09-13 13:02:25.452982] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A88-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203745452457}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:25.453002] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", 
cluster_service_tablet_id={id:226}) [2024-09-13 13:02:25.453034] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=30][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745452994) [2024-09-13 13:02:25.453047] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203745253013, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:25.453072] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.453081] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.453088] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745453056) [2024-09-13 13:02:25.457374] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=46] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, 
tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:25.461237] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] [lt=36][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:25.464511] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.464993] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.465022] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.465033] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.465051] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.465067] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745465066, replica_locations:[]}) [2024-09-13 13:02:25.465093] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.465123] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.465135] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.465164] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.465223] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553342336, cache_obj->added_lc()=false, cache_obj->get_object_id()=300, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.466518] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:25.466904] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.466930] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.466940] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.466951] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.466963] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745466962, replica_locations:[]}) [2024-09-13 13:02:25.467037] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=797786, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.468271] INFO [LIB] log_compress_loop_ (ob_log_compressor.cpp:393) 
[19885][SyslogCompress][T0][Y0-0000000000000000-0-0] [lt=26] log compressor cycles once. (ret=0, cost_time=0, compressed_file_count=0, deleted_file_count=0, disk_remaining_size=182293577728) [2024-09-13 13:02:25.475077] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=26] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7) [2024-09-13 13:02:25.481302] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D8E48925-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.481817] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D8E48925-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.501492] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=20][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:25.514271] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.514785] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.514815] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.514825] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.514839] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.514863] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745514861, replica_locations:[]}) [2024-09-13 13:02:25.514899] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=34] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.514931] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.514943] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.514980] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.515042] WDIAG [SQL.PC] 
common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553392155, cache_obj->added_lc()=false, cache_obj->get_object_id()=301, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.516318] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.516747] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.516770] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.516780] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.516794] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.516810] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745516809, replica_locations:[]}) [2024-09-13 13:02:25.516890] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=48000, remain_us=747933, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.553072] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.553094] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.553108] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745553057) [2024-09-13 13:02:25.565136] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.565647] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.565671] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.565678] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.565687] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.565700] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745565699, replica_locations:[]}) [2024-09-13 13:02:25.565717] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.565740] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.565750] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.565771] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.565823] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553442939, cache_obj->added_lc()=false, cache_obj->get_object_id()=302, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.567005] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.567337] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.567356] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.567363] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.567375] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.567385] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745567384, replica_locations:[]}) [2024-09-13 13:02:25.567453] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=697370, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.616739] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.617252] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=46][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.617275] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.617281] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.617290] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.617306] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745617304, replica_locations:[]}) [2024-09-13 13:02:25.617322] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.617346] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.617355] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.617389] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.617457] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553494558, cache_obj->added_lc()=false, cache_obj->get_object_id()=303, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.618593] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.619053] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.619072] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.619079] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.619086] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.619096] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745619095, replica_locations:[]}) [2024-09-13 13:02:25.619160] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=50000, remain_us=645663, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.619682] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=39] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 
7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:25.653135] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:25.653168] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=30][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:25.653213] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745653125) [2024-09-13 13:02:25.653225] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203745453054, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:25.653292] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.653309] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.653316] WDIAG [STORAGE.TRANS] 
generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745653235) [2024-09-13 13:02:25.653548] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A89-0-0] [lt=28][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745652567) [2024-09-13 13:02:25.653579] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A89-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203745652567}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:25.653594] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.653601] WDIAG [STORAGE.TRANS] generate_server_version 
(ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.653605] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745653589) [2024-09-13 13:02:25.657788] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=29] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:25.669412] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.669977] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.670007] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.670017] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.670031] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:25.670051] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745670049, replica_locations:[]})
[2024-09-13 13:02:25.670073] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:25.670102] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:25.670113] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:25.670135] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:25.670189] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553547304, cache_obj->added_lc()=false, cache_obj->get_object_id()=304, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:25.671289] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=39][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:25.671772] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:25.671794] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:25.671800] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:25.671811] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:25.671825] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745671825, replica_locations:[]})
[2024-09-13 13:02:25.671893] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=592930, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:25.675179] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=21] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6)
[2024-09-13 13:02:25.689826] INFO [COMMON] generate_mod_stat_task (memory_dump.cpp:220) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=1] task info(*task={type_:2, dump_all_:false, p_context_:null, slot_idx_:0, dump_tenant_ctx_:false, tenant_id_:0, ctx_id_:0, p_chunk_:null})
[2024-09-13 13:02:25.689859] INFO [COMMON] handle (memory_dump.cpp:552) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=30] handle dump task(task={type_:2, dump_all_:false, p_context_:null, slot_idx_:0, dump_tenant_ctx_:false, tenant_id_:0, ctx_id_:0, p_chunk_:null})
[2024-09-13 13:02:25.689926] INFO [COMMON] update_check_range (ob_sql_mem_leak_checker.cpp:62) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=7] update_check_range(min_check_version=0, max_check_version=0, global_version=1)
[2024-09-13 13:02:25.695517] INFO handle (memory_dump.cpp:679) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=20] statistics: tenant_cnt: 3, max_chunk_cnt: 524288
tenant_id ctx_id chunk_cnt label_cnt segv_cnt
1 0 83 158 0
1 5 1 4 0
1 7 1 2 0
1 8 49 1 0
1 12 1 1 0
1 16 3 3 0
500 0 48 205 0
500 7 3 4 0
500 8 50 1 0
500 9 2 1 0
500 10 10 2 0
500 16 1 1 0
500 17 7 7 0
500 22 3 49 0
500 23 16 10 0
508 0 3 8 0
508 8 8 1 0
cost_time: 5603
[2024-09-13 13:02:25.695582] INFO [LIB] operator() (ob_malloc_allocator.cpp:519)
[19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=19] [MEMORY] tenant: 1, limit: 3,221,225,472 hold: 355,610,624 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 240,267,264 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= PLAN_CACHE_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= GLIBC hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 102,760,448 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= META_OBJ_CTX_ID hold_bytes= 2,097,152 limit= 644,245,080 [MEMORY] ctx_id= RPC_CTX_ID hold_bytes= 6,291,456 limit= 9,223,372,036,854,775,807 [MEMORY][PM] tid= 20282 used= 2,079,936 hold= 2,097,152 pm=0x2b07d4ed4340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20287 used= 2,079,936 hold= 2,097,152 pm=0x2b07d5152340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20288 used= 2,079,936 hold= 2,097,152 pm=0x2b07d51d0340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20289 used= 2,079,936 hold= 2,097,152 pm=0x2b07d5256340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20290 used= 2,079,936 hold= 2,097,152 pm=0x2b07d52d4340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20291 used= 2,079,936 hold= 2,097,152 pm=0x2b07d5352340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20292 used= 2,079,936 hold= 4,194,304 pm=0x2b07d53d0340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20293 used= 2,079,936 hold= 2,097,152 pm=0x2b07d5456340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20294 used= 2,079,936 hold= 2,097,152 pm=0x2b07d54d4340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20300 used= 0 hold= 2,097,152 pm=0x2b07d5552340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20301 used= 0 hold= 2,097,152 pm=0x2b07d55d0340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= 20326 used= 2,079,936 hold= 2,097,152 pm=0x2b07d9656340 ctx_name=DEFAULT_CTX_ID [MEMORY][PM] tid= summary used= 20,799,360 hold= 27,262,976 [2024-09-13 13:02:25.695855] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) 
[19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=16] [MEMORY] tenant_id= 1 ctx_id= DEFAULT_CTX_ID hold= 240,267,264 used= 224,365,680 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 31,477,760 used= 31,458,304 count= 1 avg_used= 31,458,304 block_cnt= 1 chunk_cnt= 1 mod=ASHListBuffer [MEMORY] hold= 30,760,960 used= 30,611,008 count= 26 avg_used= 1,177,346 block_cnt= 26 chunk_cnt= 17 mod=MysqlRequesReco [MEMORY] hold= 20,807,680 used= 20,797,440 count= 10 avg_used= 2,079,744 block_cnt= 10 chunk_cnt= 10 mod=SqlExecutor [MEMORY] hold= 12,728,512 used= 12,613,593 count= 84 avg_used= 150,161 block_cnt= 28 chunk_cnt= 8 mod=OmtTenant [MEMORY] hold= 11,010,048 used= 10,768,896 count= 192 avg_used= 56,088 block_cnt= 192 chunk_cnt= 29 mod=[T]ObSessionDIB [MEMORY] hold= 10,719,232 used= 10,670,496 count= 10 avg_used= 1,067,049 block_cnt= 10 chunk_cnt= 7 mod=IoControl [MEMORY] hold= 8,777,728 used= 8,760,064 count= 2 avg_used= 4,380,032 block_cnt= 2 chunk_cnt= 2 mod=FreeTbltStream [MEMORY] hold= 8,540,160 used= 8,519,680 count= 1 avg_used= 8,519,680 block_cnt= 1 chunk_cnt= 1 mod=RCSrv [MEMORY] hold= 8,540,160 used= 8,519,680 count= 1 avg_used= 8,519,680 block_cnt= 1 chunk_cnt= 1 mod=ArcFetchQueue [MEMORY] hold= 5,730,304 used= 5,701,632 count= 2 avg_used= 2,850,816 block_cnt= 2 chunk_cnt= 2 mod=ServerObjecPool [MEMORY] hold= 4,943,872 used= 4,915,456 count= 2 avg_used= 2,457,728 block_cnt= 2 chunk_cnt= 2 mod=HashBuckDmId [MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=HashBuckDmChe [MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=SqlPlanMonMap [MEMORY] hold= 3,865,600 used= 3,705,600 count= 800 avg_used= 4,632 block_cnt= 800 chunk_cnt= 10 mod=CkptDgnMemCU [MEMORY] hold= 3,865,600 used= 3,705,600 count= 800 avg_used= 4,632 block_cnt= 800 chunk_cnt= 11 mod=CkptDgnMem [MEMORY] 
hold= 3,145,728 used= 2,228,224 count= 128 avg_used= 17,408 block_cnt= 128 chunk_cnt= 5 mod=SqlDtlQueue [MEMORY] hold= 3,047,424 used= 3,016,704 count= 4 avg_used= 754,176 block_cnt= 4 chunk_cnt= 4 mod=ResourceGroup [MEMORY] hold= 2,756,608 used= 2,720,005 count= 3 avg_used= 906,668 block_cnt= 3 chunk_cnt= 3 mod=SqlDtl1stBuf [MEMORY] hold= 2,650,112 used= 2,631,360 count= 1 avg_used= 2,631,360 block_cnt= 1 chunk_cnt= 1 mod=RpcStatInfo [MEMORY] hold= 2,379,776 used= 2,359,608 count= 1 avg_used= 2,359,608 block_cnt= 1 chunk_cnt= 1 mod=MediumTabletMap [MEMORY] hold= 2,379,776 used= 2,359,608 count= 1 avg_used= 2,359,608 block_cnt= 1 chunk_cnt= 1 mod=HashBuckDTLINT [MEMORY] hold= 2,375,680 used= 2,359,536 count= 2 avg_used= 1,179,768 block_cnt= 2 chunk_cnt= 2 mod=HashBuckLCSta [MEMORY] hold= 2,248,704 used= 2,228,224 count= 1 avg_used= 2,228,224 block_cnt= 1 chunk_cnt= 1 mod=LogIOCb [MEMORY] hold= 2,169,056 used= 408,600 count= 24,952 avg_used= 16 block_cnt= 266 chunk_cnt= 2 mod=Number [MEMORY] hold= 1,670,976 used= 1,663,936 count= 7 avg_used= 237,705 block_cnt= 7 chunk_cnt= 2 mod=PoolFreeList [MEMORY] hold= 1,581,056 used= 1,572,904 count= 1 avg_used= 1,572,904 block_cnt= 1 chunk_cnt= 1 mod=TabletMap [MEMORY] hold= 1,335,296 used= 1,331,072 count= 1 avg_used= 1,331,072 block_cnt= 1 chunk_cnt= 1 mod=TransService [MEMORY] hold= 1,294,336 used= 1,280,384 count= 2 avg_used= 640,192 block_cnt= 2 chunk_cnt= 1 mod=TransTimeWheel [MEMORY] hold= 1,294,336 used= 1,280,384 count= 2 avg_used= 640,192 block_cnt= 2 chunk_cnt= 1 mod=XATimeWheel [MEMORY] hold= 1,187,840 used= 1,179,768 count= 1 avg_used= 1,179,768 block_cnt= 1 chunk_cnt= 1 mod=RewriteRuleMap [MEMORY] hold= 1,187,840 used= 1,179,768 count= 1 avg_used= 1,179,768 block_cnt= 1 chunk_cnt= 1 mod=HashBuckPlanCac [MEMORY] hold= 1,015,808 used= 1,014,656 count= 4 avg_used= 253,664 block_cnt= 4 chunk_cnt= 3 mod=SQLSessionInfo [MEMORY] hold= 958,464 used= 950,272 count= 1 avg_used= 950,272 block_cnt= 1 chunk_cnt= 1 
mod=IOWorkerLQ [MEMORY] hold= 933,888 used= 931,072 count= 1 avg_used= 931,072 block_cnt= 1 chunk_cnt= 1 mod=ArcSenderQueue [MEMORY] hold= 811,008 used= 802,648 count= 11 avg_used= 72,968 block_cnt= 11 chunk_cnt= 5 mod=CommSysVarFac [MEMORY] hold= 802,816 used= 800,000 count= 1 avg_used= 800,000 block_cnt= 1 chunk_cnt= 1 mod=SqlPlanMon [MEMORY] hold= 802,816 used= 800,000 count= 1 avg_used= 800,000 block_cnt= 1 chunk_cnt= 1 mod=SqlFltSpanRec [MEMORY] hold= 786,688 used= 524,352 count= 33 avg_used= 15,889 block_cnt= 33 chunk_cnt= 4 mod=LogAlloc [MEMORY] hold= 663,552 used= 659,200 count= 1 avg_used= 659,200 block_cnt= 1 chunk_cnt= 1 mod=MulLevelQueue [MEMORY] hold= 663,552 used= 655,360 count= 1 avg_used= 655,360 block_cnt= 1 chunk_cnt= 1 mod=FetchLog [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=MdsT [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=CoordTR [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=FrzTrigger [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=CoordTF [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=DetectorTimer [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=DupTbLease [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=ElectTimer [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=MultiVersionGC [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=OBJLockGC [MEMORY] hold= 598,016 used= 590,232 count= 1 avg_used= 590,232 block_cnt= 1 chunk_cnt= 1 mod=DagNetIdMap [MEMORY] hold= 516,096 used= 458,752 count= 7 avg_used= 65,536 block_cnt= 7 chunk_cnt= 3 mod=[T]char [MEMORY] hold= 409,600 used= 401,408 count= 1 avg_used= 401,408 block_cnt= 1 chunk_cnt= 1 
mod=ReplaySrv [MEMORY] hold= 409,600 used= 401,408 count= 1 avg_used= 401,408 block_cnt= 1 chunk_cnt= 1 mod=ApplySrv [MEMORY] hold= 409,600 used= 389,600 count= 4 avg_used= 97,400 block_cnt= 4 chunk_cnt= 2 mod=ResultSet [MEMORY] hold= 385,024 used= 375,520 count= 2 avg_used= 187,760 block_cnt= 2 chunk_cnt= 2 mod=bf_queue [MEMORY] hold= 303,104 used= 294,936 count= 1 avg_used= 294,936 block_cnt= 1 chunk_cnt= 1 mod=ColUsagHashMap [MEMORY] hold= 303,104 used= 294,936 count= 1 avg_used= 294,936 block_cnt= 1 chunk_cnt= 1 mod=DmlStatHashMap [MEMORY] hold= 262,144 used= 128,768 count= 16 avg_used= 8,048 block_cnt= 16 chunk_cnt= 5 mod=[T]ObPerfEventR [MEMORY] hold= 260,096 used= 253,952 count= 32 avg_used= 7,936 block_cnt= 32 chunk_cnt= 5 mod=SqlSession [MEMORY] hold= 207,072 used= 149,504 count= 258 avg_used= 579 block_cnt= 26 chunk_cnt= 3 mod=LSMap [MEMORY] hold= 204,800 used= 196,744 count= 1 avg_used= 196,744 block_cnt= 1 chunk_cnt= 1 mod=DagNetMap [MEMORY] hold= 204,800 used= 196,616 count= 1 avg_used= 196,616 block_cnt= 1 chunk_cnt= 1 mod=T3MBucket [MEMORY] hold= 204,800 used= 196,744 count= 1 avg_used= 196,744 block_cnt= 1 chunk_cnt= 1 mod=DagMap [MEMORY] hold= 204,800 used= 196,616 count= 1 avg_used= 196,616 block_cnt= 1 chunk_cnt= 1 mod=ResourMapLock [MEMORY] hold= 180,224 used= 131,072 count= 256 avg_used= 512 block_cnt= 24 chunk_cnt= 1 mod=TabletToLS [MEMORY] hold= 147,456 used= 139,264 count= 1 avg_used= 139,264 block_cnt= 1 chunk_cnt= 1 mod=RFLTaskQueue [MEMORY] hold= 114,688 used= 106,496 count= 1 avg_used= 106,496 block_cnt= 1 chunk_cnt= 1 mod=SqlDtlMgr [MEMORY] hold= 108,928 used= 104,512 count= 23 avg_used= 4,544 block_cnt= 23 chunk_cnt= 6 mod=[T]ObTraceEvent [MEMORY] hold= 99,520 used= 24,720 count= 372 avg_used= 66 block_cnt= 98 chunk_cnt= 20 mod=Coro [MEMORY] hold= 92,880 used= 1,280 count= 1,152 avg_used= 1 block_cnt= 12 chunk_cnt= 1 mod=CharsetUtil [MEMORY] hold= 90,112 used= 82,112 count= 1 avg_used= 82,112 block_cnt= 1 chunk_cnt= 1 mod=MetaMemMgr 
[MEMORY] hold= 89,408 used= 87,296 count= 11 avg_used= 7,936 block_cnt= 11 chunk_cnt= 5 mod=SqlSessiVarMap [MEMORY] hold= 89,408 used= 87,296 count= 11 avg_used= 7,936 block_cnt= 11 chunk_cnt= 5 mod=PlanVaIdx [MEMORY] hold= 81,280 used= 79,360 count= 10 avg_used= 7,936 block_cnt= 10 chunk_cnt= 6 mod=LSIter [MEMORY] hold= 73,728 used= 72,736 count= 1 avg_used= 72,736 block_cnt= 1 chunk_cnt= 1 mod=LogSharedQueueT [MEMORY] hold= 73,728 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=DEVICE_MANAGER [MEMORY] hold= 73,728 used= 66,176 count= 1 avg_used= 66,176 block_cnt= 1 chunk_cnt= 1 mod=Rpc [MEMORY] hold= 65,536 used= 34,816 count= 4 avg_used= 8,704 block_cnt= 4 chunk_cnt= 3 mod=[T]ObDSActionAr [MEMORY] hold= 48,960 used= 44,840 count= 2 avg_used= 22,420 block_cnt= 2 chunk_cnt= 2 mod=DynamicFactor [MEMORY] hold= 45,056 used= 32,768 count= 64 avg_used= 512 block_cnt= 8 chunk_cnt= 3 mod=TxCtxMgr [MEMORY] hold= 43,360 used= 4,608 count= 192 avg_used= 24 block_cnt= 73 chunk_cnt= 19 mod=[T]MemoryContex [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=DagWarnHisBkt [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=SuspectInfoBkt [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=Autoincrement [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HB_SERVICE [MEMORY] hold= 40,640 used= 39,680 count= 5 avg_used= 7,936 block_cnt= 5 chunk_cnt= 1 mod=ObDMMDL [MEMORY] hold= 32,768 used= 25,664 count= 1 avg_used= 25,664 block_cnt= 1 chunk_cnt= 1 mod=TSQLSessionMgr [MEMORY] hold= 32,768 used= 17,408 count= 2 avg_used= 8,704 block_cnt= 2 chunk_cnt= 1 mod=ObLogEXTTP [MEMORY] hold= 32,768 used= 24,688 count= 2 avg_used= 12,344 block_cnt= 2 chunk_cnt= 2 mod=TLD_ClientTask [MEMORY] hold= 24,576 used= 16,384 count= 1 avg_used= 16,384 block_cnt= 1 chunk_cnt= 1 mod=SlogWriteBuffer [MEMORY] hold= 24,576 used= 
17,664 count= 1 avg_used= 17,664 block_cnt= 1 chunk_cnt= 1 mod=IO_MGR [MEMORY] hold= 19,200 used= 15,360 count= 20 avg_used= 768 block_cnt= 15 chunk_cnt= 8 mod=TGTimer [MEMORY] hold= 17,024 used= 8,448 count= 3 avg_used= 2,816 block_cnt= 2 chunk_cnt= 2 mod=BaseLogWriter [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=TLD_TableCtxMgr [MEMORY] hold= 16,384 used= 8,448 count= 1 avg_used= 8,448 block_cnt= 1 chunk_cnt= 1 mod=PalfEnv [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=TLD_AssignedMgr [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=TLD_TblCtxIMgr [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=backupTaskSched [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=TabletStats [MEMORY] hold= 16,384 used= 8,192 count= 1 avg_used= 8,192 block_cnt= 1 chunk_cnt= 1 mod=SlogNopLog [MEMORY] hold= 16,000 used= 15,616 count= 2 avg_used= 7,808 block_cnt= 2 chunk_cnt= 2 mod=HashNodeLCSta [MEMORY] hold= 15,552 used= 12,096 count= 18 avg_used= 672 block_cnt= 16 chunk_cnt= 7 mod=[T]ObWarningBuf [MEMORY] hold= 15,232 used= 10,400 count= 25 avg_used= 416 block_cnt= 12 chunk_cnt= 4 mod=OMT_Worker [MEMORY] hold= 9,664 used= 9,264 count= 2 avg_used= 4,632 block_cnt= 2 chunk_cnt= 1 mod=WorkerMap [MEMORY] hold= 8,576 used= 8,192 count= 2 avg_used= 4,096 block_cnt= 2 chunk_cnt= 2 mod=LinearHashMapDi [MEMORY] hold= 8,576 used= 8,192 count= 2 avg_used= 4,096 block_cnt= 2 chunk_cnt= 2 mod=LinearHashMapCn [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=REPLAY_STATUS [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=APPLY_STATUS [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=LCLSender [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 
mod=LockWaitMgr [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=DASIDCache [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ShareBlksMap [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=HTableLockMap [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=MdsDebugMap [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=IORunners [MEMORY] hold= 8,000 used= 7,808 count= 1 avg_used= 7,808 block_cnt= 1 chunk_cnt= 1 mod=HashNodePlanCac [MEMORY] hold= 7,744 used= 5,632 count= 11 avg_used= 512 block_cnt= 9 chunk_cnt= 5 mod=SqlSessiQuerSql [MEMORY] hold= 6,928 used= 4,664 count= 11 avg_used= 424 block_cnt= 9 chunk_cnt= 6 mod=PackStateMap [MEMORY] hold= 6,864 used= 4,664 count= 11 avg_used= 424 block_cnt= 11 chunk_cnt= 4 mod=SequenceMap [MEMORY] hold= 6,864 used= 4,664 count= 11 avg_used= 424 block_cnt= 11 chunk_cnt= 6 mod=ContextsMap [MEMORY] hold= 6,864 used= 4,664 count= 11 avg_used= 424 block_cnt= 11 chunk_cnt= 5 mod=SequenceIdMap [MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=PxPoolBkt [MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=ResGrpIdMap [MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=ResRuleIdMap [MEMORY] hold= 4,288 used= 4,096 count= 1 avg_used= 4,096 block_cnt= 1 chunk_cnt= 1 mod=MacroFile [MEMORY] hold= 3,904 used= 3,712 count= 1 avg_used= 3,712 block_cnt= 1 chunk_cnt= 1 mod=SqlDtlDfc [MEMORY] hold= 3,328 used= 1,792 count= 8 avg_used= 224 block_cnt= 1 chunk_cnt= 1 mod=LogIOTask [MEMORY] hold= 2,048 used= 1,856 count= 1 avg_used= 1,856 block_cnt= 1 chunk_cnt= 1 mod=LogIOWS [MEMORY] hold= 2,000 used= 1,800 count= 1 avg_used= 1,800 block_cnt= 1 chunk_cnt= 1 mod=PxResMgr [MEMORY] hold= 1,792 used= 1,600 count= 1 avg_used= 1,600 block_cnt= 1 chunk_cnt= 1 
mod=LogPartFetCtxPo [MEMORY] hold= 1,744 used= 1,352 count= 2 avg_used= 676 block_cnt= 2 chunk_cnt= 2 mod=DetectManager [MEMORY] hold= 1,744 used= 1,544 count= 1 avg_used= 1,544 block_cnt= 1 chunk_cnt= 1 mod=TabStatMgrLock [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=DUP_LS_SET [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=IRMMemHashBuck [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=HashBucApiGroup [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=GROUP_INDEX_MAP [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=GCMemtableMap [MEMORY] hold= 1,280 used= 1,080 count= 1 avg_used= 1,080 block_cnt= 1 chunk_cnt= 1 mod=ModuleInitCtx [MEMORY] hold= 1,248 used= 1,056 count= 1 avg_used= 1,056 block_cnt= 1 chunk_cnt= 1 mod=LOG_HASH_MAP [MEMORY] hold= 1,120 used= 120 count= 5 avg_used= 24 block_cnt= 1 chunk_cnt= 1 mod=FreezeTask [MEMORY] hold= 1,024 used= 640 count= 2 avg_used= 320 block_cnt= 1 chunk_cnt= 1 mod=PoolArenaArray [MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=DiskUsageTimer [MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=FlushTimer [MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=CheckPointTimer [MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=TabletGC [MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=TLD_TIMER [MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=TabletShell [MEMORY] hold= 848 used= 648 count= 1 avg_used= 648 block_cnt= 1 chunk_cnt= 1 mod=ResRuleInfo [MEMORY] hold= 752 used= 552 count= 1 avg_used= 552 block_cnt= 1 chunk_cnt= 1 mod=LSFreeze [MEMORY] hold= 576 used= 384 count= 1 avg_used= 384 block_cnt= 1 chunk_cnt= 1 mod=HAScheduler [MEMORY] hold= 576 used= 
384 count= 1 avg_used= 384 block_cnt= 1 chunk_cnt= 1 mod=Scheduler [MEMORY] hold= 576 used= 384 count= 1 avg_used= 384 block_cnt= 1 chunk_cnt= 1 mod=MSTXCTX [MEMORY] hold= 544 used= 144 count= 2 avg_used= 72 block_cnt= 2 chunk_cnt= 1 mod=TntSrvObjPool [MEMORY] hold= 512 used= 120 count= 2 avg_used= 60 block_cnt= 2 chunk_cnt= 1 mod=UserResourceMgr [MEMORY] hold= 416 used= 16 count= 2 avg_used= 8 block_cnt= 2 chunk_cnt= 2 mod=ObLogEXTHandler [MEMORY] hold= 352 used= 112 count= 1 avg_used= 112 block_cnt= 1 chunk_cnt= 1 mod=Coordinator [MEMORY] hold= 256 used= 56 count= 1 avg_used= 56 block_cnt= 1 chunk_cnt= 1 mod=ResRuleInfoMap [MEMORY] hold= 208 used= 16 count= 1 avg_used= 16 block_cnt= 1 chunk_cnt= 1 mod=logservice [MEMORY] hold= 224,365,680 used= 219,182,134 count= 29,750 avg_used= 7,367 mod=SUMMARY [2024-09-13 13:02:25.695948] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=70] [MEMORY] tenant_id= 1 ctx_id= PLAN_CACHE_CTX_ID hold= 2,097,152 used= 229,504 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 212,864 used= 193,616 count= 6 avg_used= 32,269 block_cnt= 6 chunk_cnt= 1 mod=SqlPhyPlan [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=SqlPlanCache [MEMORY] hold= 6,528 used= 5,952 count= 3 avg_used= 1,984 block_cnt= 2 chunk_cnt= 1 mod=CreateContext [MEMORY] hold= 1,984 used= 1,600 count= 2 avg_used= 800 block_cnt= 1 chunk_cnt= 1 mod=PlanCache [MEMORY] hold= 229,504 used= 209,104 count= 12 avg_used= 17,425 mod=SUMMARY [2024-09-13 13:02:25.695972] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=14] [MEMORY] tenant_id= 1 ctx_id= GLIBC hold= 2,097,152 used= 80,992 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 
80,896 used= 59,018 count= 263 avg_used= 224 block_cnt= 22 chunk_cnt= 1 mod=PlJit [MEMORY] hold= 96 used= 32 count= 1 avg_used= 32 block_cnt= 1 chunk_cnt= 1 mod=PlCodeGen [MEMORY] hold= 80,992 used= 59,050 count= 264 avg_used= 223 mod=SUMMARY [2024-09-13 13:02:25.695985] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=7] [MEMORY] tenant_id= 1 ctx_id= CO_STACK hold= 102,760,448 used= 99,606,528 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 99,606,528 used= 99,421,248 count= 193 avg_used= 515,136 block_cnt= 193 chunk_cnt= 49 mod=CoStack [MEMORY] hold= 99,606,528 used= 99,421,248 count= 193 avg_used= 515,136 mod=SUMMARY [2024-09-13 13:02:25.696010] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=7] [MEMORY] tenant_id= 1 ctx_id= META_OBJ_CTX_ID hold= 2,097,152 used= 401,408 limit= 644,245,080 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 401,408 used= 400,064 count= 2 avg_used= 200,032 block_cnt= 2 chunk_cnt= 1 mod=PoolFreeList [MEMORY] hold= 401,408 used= 400,064 count= 2 avg_used= 200,032 mod=SUMMARY [2024-09-13 13:02:25.696033] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=7] [MEMORY] tenant_id= 1 ctx_id= RPC_CTX_ID hold= 6,291,456 used= 368,640 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 270,336 used= 178,816 count= 11 avg_used= 16,256 block_cnt= 11 chunk_cnt= 3 mod=[L]OB_REMOTE_SY [MEMORY] hold= 73,728 used= 48,768 count= 3 avg_used= 16,256 block_cnt= 3 chunk_cnt= 2 mod=[L]OB_REMOTE_EX [MEMORY] hold= 24,576 used= 16,256 count= 1 avg_used= 16,256 block_cnt= 1 chunk_cnt= 1 mod=[L]OB_PX_TARGET [MEMORY] hold= 368,640 
used= 243,840 count= 15 avg_used= 16,256 mod=SUMMARY [2024-09-13 13:02:25.696139] INFO [LIB] operator() (ob_malloc_allocator.cpp:519) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=8] [MEMORY] tenant: 500, limit: 9,223,372,036,854,775,807 hold: 540,209,152 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 169,332,736 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= GLIBC hold_bytes= 6,291,456 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 104,857,600 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= LIBEASY hold_bytes= 4,194,304 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= LOGGER_CTX_ID hold_bytes= 20,971,520 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= RPC_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= PKT_NIO hold_bytes= 18,989,056 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= SCHEMA_SERVICE hold_bytes= 11,292,672 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= UNEXPECTED_IN_500 hold_bytes= 202,182,656 limit= 9,223,372,036,854,775,807 [2024-09-13 13:02:25.696359] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=16] [MEMORY] tenant_id= 500 ctx_id= DEFAULT_CTX_ID hold= 169,332,736 used= 163,948,608 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 33,574,912 used= 33,554,464 count= 1 avg_used= 33,554,464 block_cnt= 1 chunk_cnt= 1 mod=BloomFilter [MEMORY] hold= 12,779,520 used= 12,760,352 count= 1 avg_used= 12,760,352 block_cnt= 1 chunk_cnt= 1 mod=MemDumpContext [MEMORY] hold= 11,526,144 used= 11,273,688 count= 201 avg_used= 56,088 block_cnt= 201 chunk_cnt= 28 mod=[T]ObSessionDIB [MEMORY] hold= 10,792,960 used= 10,735,904 count= 11 avg_used= 975,991 block_cnt= 11 chunk_cnt= 8 mod=IoControl [MEMORY] hold= 9,457,664 used= 9,437,784 count= 1 avg_used= 9,437,784 block_cnt= 1 
chunk_cnt= 1 mod=HashBuckInteChe [MEMORY] hold= 6,919,312 used= 6,816,688 count= 51 avg_used= 133,660 block_cnt= 31 chunk_cnt= 12 mod=PartitTableTask [MEMORY] hold= 5,218,304 used= 5,157,040 count= 6 avg_used= 859,506 block_cnt= 6 chunk_cnt= 4 mod=KvstCachWashStr [MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=PxP2PDhMgrKey [MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=HashPxBlooFilKe [MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=HashBucTenComMo [MEMORY] hold= 4,307,456 used= 4,278,711 count= 3 avg_used= 1,426,237 block_cnt= 3 chunk_cnt= 3 mod=SqlDtlMgr [MEMORY] hold= 4,232,192 used= 4,203,008 count= 6 avg_used= 700,501 block_cnt= 5 chunk_cnt= 4 mod=BaseLogWriter [MEMORY] hold= 4,214,896 used= 4,194,328 count= 2 avg_used= 2,097,164 block_cnt= 2 chunk_cnt= 2 mod=SerFuncRegHT [MEMORY] hold= 4,194,304 used= 4,176,267 count= 1 avg_used= 4,176,267 block_cnt= 1 chunk_cnt= 1 mod=SyslogCompress [MEMORY] hold= 3,997,696 used= 3,904,096 count= 12 avg_used= 325,341 block_cnt= 12 chunk_cnt= 6 mod=DedupQueue [MEMORY] hold= 2,379,776 used= 2,359,608 count= 1 avg_used= 2,359,608 block_cnt= 1 chunk_cnt= 1 mod=HashBucIdUnitMa [MEMORY] hold= 2,167,552 used= 2,129,912 count= 7 avg_used= 304,273 block_cnt= 7 chunk_cnt= 4 mod=FixedQueue [MEMORY] hold= 2,061,920 used= 1,937,608 count= 491 avg_used= 3,946 block_cnt= 246 chunk_cnt= 3 mod=CharsetInit [MEMORY] hold= 1,581,056 used= 1,572,904 count= 1 avg_used= 1,572,904 block_cnt= 1 chunk_cnt= 1 mod=DInsSstMgr [MEMORY] hold= 1,581,056 used= 1,572,904 count= 1 avg_used= 1,572,904 block_cnt= 1 chunk_cnt= 1 mod=IdConnMap [MEMORY] hold= 1,548,288 used= 1,507,118 count= 5 avg_used= 301,423 block_cnt= 5 chunk_cnt= 3 mod=LDIOSetup [MEMORY] hold= 1,432,576 used= 1,411,072 count= 36 avg_used= 39,196 block_cnt= 36 chunk_cnt= 12 mod=CommonArray [MEMORY] hold= 1,327,104 used= 1,114,112 count= 
1,026 avg_used= 1,085 block_cnt= 97 chunk_cnt= 2 mod=TabletLSMap [MEMORY] hold= 1,286,144 used= 1,270,296 count= 2 avg_used= 635,148 block_cnt= 2 chunk_cnt= 2 mod=PxResMgr [MEMORY] hold= 1,230,208 used= 1,213,504 count= 10 avg_used= 121,350 block_cnt= 9 chunk_cnt= 8 mod=TenantCtxAlloca [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=TsSourceInfoMap [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=TenantResCtrl [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=ConcurHashMap [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=ResRuleInfoMap [MEMORY] hold= 941,984 used= 925,792 count= 3 avg_used= 308,597 block_cnt= 3 chunk_cnt= 2 mod=Omt [MEMORY] hold= 936,832 used= 855,104 count= 431 avg_used= 1,984 block_cnt= 146 chunk_cnt= 22 mod=CreateContext [MEMORY] hold= 933,888 used= 917,576 count= 2 avg_used= 458,788 block_cnt= 2 chunk_cnt= 2 mod=CACHE_INST_MAP [MEMORY] hold= 761,856 used= 733,184 count= 4 avg_used= 183,296 block_cnt= 4 chunk_cnt= 3 mod=LightyQueue [MEMORY] hold= 709,152 used= 658,248 count= 11 avg_used= 59,840 block_cnt= 11 chunk_cnt= 6 mod=HashBucket [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=EventTimer [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=GEleTimer [MEMORY] hold= 622,592 used= 619,008 count= 1 avg_used= 619,008 block_cnt= 1 chunk_cnt= 1 mod=SysTaskStatus [MEMORY] hold= 557,056 used= 295,936 count= 34 avg_used= 8,704 block_cnt= 34 chunk_cnt= 9 mod=[T]ObDSActionAr [MEMORY] hold= 544,448 used= 531,712 count= 67 avg_used= 7,936 block_cnt= 67 chunk_cnt= 3 mod=ModulePageAlloc [MEMORY] hold= 510,880 used= 442,352 count= 280 avg_used= 1,579 block_cnt= 85 chunk_cnt= 10 mod=tg [MEMORY] hold= 442,368 used= 393,216 count= 6 avg_used= 65,536 block_cnt= 6 chunk_cnt= 
3 mod=[T]char [MEMORY] hold= 401,408 used= 393,256 count= 1 avg_used= 393,256 block_cnt= 1 chunk_cnt= 1 mod=TablStorStatMgr [MEMORY] hold= 369,088 used= 330,944 count= 7 avg_used= 47,277 block_cnt= 7 chunk_cnt= 3 mod=Rpc [MEMORY] hold= 303,104 used= 294,936 count= 1 avg_used= 294,936 block_cnt= 1 chunk_cnt= 1 mod=register_task [MEMORY] hold= 303,104 used= 294,936 count= 1 avg_used= 294,936 block_cnt= 1 chunk_cnt= 1 mod=register_tasks [MEMORY] hold= 229,376 used= 224,000 count= 1 avg_used= 224,000 block_cnt= 1 chunk_cnt= 1 mod=BGTMonitor [MEMORY] hold= 221,184 used= 212,992 count= 1 avg_used= 212,992 block_cnt= 1 chunk_cnt= 1 mod=TSWorker [MEMORY] hold= 221,184 used= 214,432 count= 1 avg_used= 214,432 block_cnt= 1 chunk_cnt= 1 mod=CompSuggestMgr [MEMORY] hold= 215,024 used= 210,352 count= 27 avg_used= 7,790 block_cnt= 27 chunk_cnt= 8 mod=HashNode [MEMORY] hold= 212,992 used= 207,168 count= 1 avg_used= 207,168 block_cnt= 1 chunk_cnt= 1 mod=TenantMutilAllo [MEMORY] hold= 212,992 used= 196,624 count= 2 avg_used= 98,312 block_cnt= 2 chunk_cnt= 1 mod=DRTaskMap [MEMORY] hold= 212,992 used= 196,624 count= 2 avg_used= 98,312 block_cnt= 2 chunk_cnt= 1 mod=DdlQue [MEMORY] hold= 207,088 used= 149,504 count= 258 avg_used= 579 block_cnt= 108 chunk_cnt= 5 mod=LSLocationMap [MEMORY] hold= 206,304 used= 193,824 count= 24 avg_used= 8,076 block_cnt= 24 chunk_cnt= 3 mod=MallocInfoMap [MEMORY] hold= 197,344 used= 158,736 count= 12 avg_used= 13,228 block_cnt= 11 chunk_cnt= 6 mod=BucketLock [MEMORY] hold= 180,224 used= 172,064 count= 1 avg_used= 172,064 block_cnt= 1 chunk_cnt= 1 mod=TenantMBList [MEMORY] hold= 171,312 used= 167,080 count= 22 avg_used= 7,594 block_cnt= 22 chunk_cnt= 2 mod=TenaSpaTabIdSet [MEMORY] hold= 163,840 used= 147,792 count= 2 avg_used= 73,896 block_cnt= 2 chunk_cnt= 1 mod=HashNodNexWaiMa [MEMORY] hold= 155,648 used= 147,624 count= 1 avg_used= 147,624 block_cnt= 1 chunk_cnt= 1 mod=OB_DISK_REP [MEMORY] hold= 155,648 used= 147,624 count= 1 avg_used= 147,624 block_cnt= 
1 chunk_cnt= 1 mod=UsrRuleMap [MEMORY] hold= 155,648 used= 148,032 count= 1 avg_used= 148,032 block_cnt= 1 chunk_cnt= 1 mod=CompEventMgr [MEMORY] hold= 155,312 used= 151,464 count= 20 avg_used= 7,573 block_cnt= 20 chunk_cnt= 3 mod=SysTableNameMap [MEMORY] hold= 147,456 used= 130,816 count= 2 avg_used= 65,408 block_cnt= 2 chunk_cnt= 2 mod=KVCACHE_HAZARD [MEMORY] hold= 147,456 used= 145,936 count= 2 avg_used= 72,968 block_cnt= 2 chunk_cnt= 2 mod=CommSysVarFac [MEMORY] hold= 139,264 used= 131,776 count= 1 avg_used= 131,776 block_cnt= 1 chunk_cnt= 1 mod=GtsTaskQueue [MEMORY] hold= 138,816 used= 134,912 count= 11 avg_used= 12,264 block_cnt= 11 chunk_cnt= 3 mod=SeArray [MEMORY] hold= 131,072 used= 130,384 count= 2 avg_used= 65,192 block_cnt= 2 chunk_cnt= 2 mod=LatchStat [MEMORY] hold= 124,464 used= 122,288 count= 16 avg_used= 7,643 block_cnt= 16 chunk_cnt= 4 mod=HashNodeConfCon [MEMORY] hold= 122,688 used= 118,904 count= 2 avg_used= 59,452 block_cnt= 2 chunk_cnt= 2 mod=RefrFullScheMap [MEMORY] hold= 122,688 used= 118,904 count= 2 avg_used= 59,452 block_cnt= 2 chunk_cnt= 1 mod=MemMgrMap [MEMORY] hold= 122,688 used= 118,904 count= 2 avg_used= 59,452 block_cnt= 2 chunk_cnt= 2 mod=MemMgrForLiboMa [MEMORY] hold= 122,688 used= 118,904 count= 2 avg_used= 59,452 block_cnt= 2 chunk_cnt= 2 mod=TenaSchForCacMa [MEMORY] hold= 114,688 used= 110,736 count= 2 avg_used= 55,368 block_cnt= 2 chunk_cnt= 1 mod=DepInfoTaskQ [MEMORY] hold= 114,688 used= 111,096 count= 1 avg_used= 111,096 block_cnt= 1 chunk_cnt= 1 mod=NonPartTenMap [MEMORY] hold= 114,688 used= 111,096 count= 1 avg_used= 111,096 block_cnt= 1 chunk_cnt= 1 mod=IndNameMap [MEMORY] hold= 114,336 used= 106,088 count= 2 avg_used= 53,044 block_cnt= 2 chunk_cnt= 2 mod=RetryCtrl [MEMORY] hold= 106,496 used= 98,312 count= 1 avg_used= 98,312 block_cnt= 1 chunk_cnt= 1 mod=TmpFileManager [MEMORY] hold= 106,496 used= 92,160 count= 2 avg_used= 46,080 block_cnt= 2 chunk_cnt= 2 mod=LDBlockBitMap [MEMORY] hold= 106,496 used= 86,408 count= 5 
avg_used= 17,281 block_cnt= 5 chunk_cnt= 3 mod=HashBuckConfCon [MEMORY] hold= 105,664 used= 103,168 count= 13 avg_used= 7,936 block_cnt= 13 chunk_cnt= 6 mod=HashMapArray [MEMORY] hold= 98,304 used= 83,072 count= 2 avg_used= 41,536 block_cnt= 2 chunk_cnt= 1 mod=IO_MGR [MEMORY] hold= 95,408 used= 25,600 count= 353 avg_used= 72 block_cnt= 201 chunk_cnt= 19 mod=Coro [MEMORY] hold= 81,920 used= 74,064 count= 2 avg_used= 37,032 block_cnt= 2 chunk_cnt= 2 mod=io_trace_map [MEMORY] hold= 73,728 used= 65,600 count= 1 avg_used= 65,600 block_cnt= 1 chunk_cnt= 1 mod=TCREF [MEMORY] hold= 73,728 used= 69,664 count= 1 avg_used= 69,664 block_cnt= 1 chunk_cnt= 1 mod=SuperBlockBuffe [MEMORY] hold= 65,536 used= 63,272 count= 1 avg_used= 63,272 block_cnt= 1 chunk_cnt= 1 mod=SqlSessionSbloc [MEMORY] hold= 65,264 used= 63,096 count= 2 avg_used= 31,548 block_cnt= 2 chunk_cnt= 1 mod=ScheCacSysCacMa [MEMORY] hold= 65,024 used= 63,488 count= 8 avg_used= 7,936 block_cnt= 8 chunk_cnt= 6 mod=SessionInfoHash [MEMORY] hold= 56,832 used= 54,528 count= 12 avg_used= 4,544 block_cnt= 12 chunk_cnt= 5 mod=[T]ObTraceEvent [MEMORY] hold= 49,152 used= 36,912 count= 2 avg_used= 18,456 block_cnt= 2 chunk_cnt= 2 mod=HashBuckSysConf [MEMORY] hold= 49,152 used= 32,768 count= 2 avg_used= 16,384 block_cnt= 2 chunk_cnt= 2 mod=CACHE_TNT_LST [MEMORY] hold= 49,152 used= 37,032 count= 3 avg_used= 12,344 block_cnt= 3 chunk_cnt= 2 mod=ReferedMap [MEMORY] hold= 47,168 used= 45,056 count= 11 avg_used= 4,096 block_cnt= 11 chunk_cnt= 7 mod=LinearHashMapDi [MEMORY] hold= 47,168 used= 45,056 count= 11 avg_used= 4,096 block_cnt= 11 chunk_cnt= 6 mod=LinearHashMapCn [MEMORY] hold= 44,608 used= 34,560 count= 14 avg_used= 2,468 block_cnt= 11 chunk_cnt= 5 mod=TGTimer [MEMORY] hold= 44,192 used= 4,800 count= 200 avg_used= 24 block_cnt= 121 chunk_cnt= 17 mod=[T]MemoryContex [MEMORY] hold= 41,280 used= 37,264 count= 2 avg_used= 18,632 block_cnt= 2 chunk_cnt= 1 mod=TaskRunnerSer [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 
37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucIdConfMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucIdPoolMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=ObLongopsMgr [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucNamConMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucNamPooMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=SessHoldMapBuck [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucSerUniMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucPooUniMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HasBucConRefCoM [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucConPooMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucTenPooMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=DDLSpeedCtrl [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HasBucSerMigUnM [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=TmpFileStoreMap [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=ProxySessBuck [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=SqlLoadData [MEMORY] hold= 40,384 used= 39,680 count= 5 avg_used= 7,936 block_cnt= 5 chunk_cnt= 3 mod=SqlSession [MEMORY] hold= 29,184 used= 25,320 count= 20 avg_used= 1,266 block_cnt= 14 chunk_cnt= 7 mod=ObGuard [MEMORY] hold= 25,728 used= 23,736 count= 10 avg_used= 2,373 block_cnt= 7 chunk_cnt= 5 mod=RpcProcessor [MEMORY] hold= 25,440 used= 24,264 count= 6 
avg_used= 4,044 block_cnt= 5 chunk_cnt= 2 mod=ScheObSchemAren [MEMORY] hold= 25,104 used= 20,784 count= 2 avg_used= 10,392 block_cnt= 2 chunk_cnt= 2 mod=SqlNio [MEMORY] hold= 24,576 used= 18,456 count= 1 avg_used= 18,456 block_cnt= 1 chunk_cnt= 1 mod=leakMap [MEMORY] hold= 24,576 used= 16,384 count= 1 avg_used= 16,384 block_cnt= 1 chunk_cnt= 1 mod=SlogWriteBuffer [MEMORY] hold= 24,576 used= 17,408 count= 1 avg_used= 17,408 block_cnt= 1 chunk_cnt= 1 mod=SvrStartupHandl [MEMORY] hold= 24,576 used= 18,456 count= 1 avg_used= 18,456 block_cnt= 1 chunk_cnt= 1 mod=ServerCkptSlogH [MEMORY] hold= 24,576 used= 18,456 count= 1 avg_used= 18,456 block_cnt= 1 chunk_cnt= 1 mod=GrpIdNameMap [MEMORY] hold= 24,576 used= 18,456 count= 1 avg_used= 18,456 block_cnt= 1 chunk_cnt= 1 mod=FuncRuleMap [MEMORY] hold= 24,576 used= 18,456 count= 1 avg_used= 18,456 block_cnt= 1 chunk_cnt= 1 mod=GrpNameIdMap [MEMORY] hold= 24,352 used= 21,672 count= 2 avg_used= 10,836 block_cnt= 2 chunk_cnt= 2 mod=SchemaStatuMap [MEMORY] hold= 23,552 used= 20,352 count= 16 avg_used= 1,272 block_cnt= 16 chunk_cnt= 2 mod=IO_GROUP_MAP [MEMORY] hold= 18,928 used= 18,144 count= 4 avg_used= 4,536 block_cnt= 4 chunk_cnt= 2 mod=DeviceMng [MEMORY] hold= 17,200 used= 8,944 count= 43 avg_used= 208 block_cnt= 7 chunk_cnt= 2 mod=Scheduler [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=TbltRefreshMap [MEMORY] hold= 16,384 used= 8,192 count= 1 avg_used= 8,192 block_cnt= 1 chunk_cnt= 1 mod=ServerLogPool [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=GenSchemVersMap [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=MemDumpMap [MEMORY] hold= 16,384 used= 8,192 count= 1 avg_used= 8,192 block_cnt= 1 chunk_cnt= 1 mod=LinkArray [MEMORY] hold= 16,384 used= 9,336 count= 1 avg_used= 9,336 block_cnt= 1 chunk_cnt= 1 mod=InnerLobHash [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 
chunk_cnt= 1 mod=HasBucTimZonInM [MEMORY] hold= 16,384 used= 9,392 count= 1 avg_used= 9,392 block_cnt= 1 chunk_cnt= 1 mod=TenCompProgMgr [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=StorageHADiag [MEMORY] hold= 16,384 used= 8,992 count= 1 avg_used= 8,992 block_cnt= 1 chunk_cnt= 1 mod=IO_HEALTH [MEMORY] hold= 16,384 used= 12,296 count= 1 avg_used= 12,296 block_cnt= 1 chunk_cnt= 1 mod=ResourMapLock [MEMORY] hold= 16,384 used= 8,192 count= 1 avg_used= 8,192 block_cnt= 1 chunk_cnt= 1 mod=SlogNopLog [MEMORY] hold= 16,320 used= 15,904 count= 2 avg_used= 7,952 block_cnt= 2 chunk_cnt= 1 mod=UpgProcSet [MEMORY] hold= 16,128 used= 15,872 count= 2 avg_used= 7,936 block_cnt= 2 chunk_cnt= 2 mod=PlanVaIdx [MEMORY] hold= 16,000 used= 15,872 count= 2 avg_used= 7,936 block_cnt= 2 chunk_cnt= 1 mod=CommSysVarDefVa [MEMORY] hold= 11,520 used= 8,384 count= 16 avg_used= 524 block_cnt= 6 chunk_cnt= 3 mod=RpcBuffer [MEMORY] hold= 11,520 used= 9,216 count= 12 avg_used= 768 block_cnt= 9 chunk_cnt= 6 mod=timer [MEMORY] hold= 9,200 used= 9,064 count= 2 avg_used= 4,532 block_cnt= 2 chunk_cnt= 1 mod=RedisTypeMap [MEMORY] hold= 8,640 used= 8,048 count= 3 avg_used= 2,682 block_cnt= 3 chunk_cnt= 3 mod=TenantInfo [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ObTsTenantInfoN [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ServerBlacklist [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=InneSqlConnPool [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ServerIdcMap [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ServerRegioMap [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=SchemaRowKey [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ServerCidMap [MEMORY] hold= 8,128 used= 7,936 
count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=IORunners [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=RsEventQueue [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=RpcKeepalive [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=SqlSessiVarMap [MEMORY] hold= 8,000 used= 7,808 count= 1 avg_used= 7,808 block_cnt= 1 chunk_cnt= 1 mod=SessHoldMapNode [MEMORY] hold= 7,872 used= 7,808 count= 1 avg_used= 7,808 block_cnt= 1 chunk_cnt= 1 mod=HasNodTzInfM [MEMORY] hold= 6,816 used= 2,304 count= 24 avg_used= 96 block_cnt= 24 chunk_cnt= 3 mod=PThread [MEMORY] hold= 5,728 used= 5,336 count= 2 avg_used= 2,668 block_cnt= 2 chunk_cnt= 2 mod=DeadLock [MEMORY] hold= 5,376 used= 5,248 count= 2 avg_used= 2,624 block_cnt= 1 chunk_cnt= 1 mod=RootContext [MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=SqlPx [MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=RebuildCtx [MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=HashBuckDmReq [MEMORY] hold= 4,016 used= 3,816 count= 1 avg_used= 3,816 block_cnt= 1 chunk_cnt= 1 mod=RecScheHisMap [MEMORY] hold= 4,016 used= 3,816 count= 1 avg_used= 3,816 block_cnt= 1 chunk_cnt= 1 mod=RemMasterMap [MEMORY] hold= 3,264 used= 3,200 count= 1 avg_used= 3,200 block_cnt= 1 chunk_cnt= 1 mod=TenantTZ [MEMORY] hold= 2,768 used= 2,704 count= 1 avg_used= 2,704 block_cnt= 1 chunk_cnt= 1 mod=LoggerAlloc [MEMORY] hold= 2,592 used= 2,016 count= 3 avg_used= 672 block_cnt= 3 chunk_cnt= 3 mod=[T]ObWarningBuf [MEMORY] hold= 2,528 used= 2,328 count= 1 avg_used= 2,328 block_cnt= 1 chunk_cnt= 1 mod=StorageS3 [MEMORY] hold= 2,528 used= 2,328 count= 1 avg_used= 2,328 block_cnt= 1 chunk_cnt= 1 mod=SqlCompile [MEMORY] hold= 2,128 used= 1,080 count= 5 avg_used= 216 block_cnt= 4 chunk_cnt= 4 mod=ObFuture [MEMORY] hold= 2,112 used= 1,920 count= 1 
avg_used= 1,920 block_cnt= 1 chunk_cnt= 1 mod=LobManager [MEMORY] hold= 1,648 used= 1,448 count= 1 avg_used= 1,448 block_cnt= 1 chunk_cnt= 1 mod=GtsRequestRpc [MEMORY] hold= 1,616 used= 1,416 count= 1 avg_used= 1,416 block_cnt= 1 chunk_cnt= 1 mod=GtiRequestRpc [MEMORY] hold= 1,568 used= 1,344 count= 1 avg_used= 1,344 block_cnt= 1 chunk_cnt= 1 mod=SchemaService [MEMORY] hold= 1,520 used= 1,328 count= 1 avg_used= 1,328 block_cnt= 1 chunk_cnt= 1 mod=GtsRpcProxy [MEMORY] hold= 1,520 used= 1,328 count= 1 avg_used= 1,328 block_cnt= 1 chunk_cnt= 1 mod=GtiRpcProxy [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=HashBucRefObj [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=GROUP_INDEX_MAP [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=INGRESS_MAP [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=IO_CHANNEL_MAP [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=TENANT_PLAN_MAP [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=Autoincrement [MEMORY] hold= 1,328 used= 280 count= 5 avg_used= 56 block_cnt= 5 chunk_cnt= 2 mod=Log [MEMORY] hold= 1,280 used= 1,088 count= 1 avg_used= 1,088 block_cnt= 1 chunk_cnt= 1 mod=memdumpqueue [MEMORY] hold= 1,216 used= 960 count= 2 avg_used= 480 block_cnt= 2 chunk_cnt= 2 mod=TntResourceMgr [MEMORY] hold= 992 used= 112 count= 4 avg_used= 28 block_cnt= 4 chunk_cnt= 3 mod=KeepAliveServer [MEMORY] hold= 912 used= 264 count= 3 avg_used= 88 block_cnt= 3 chunk_cnt= 2 mod=DestKAState [MEMORY] hold= 896 used= 704 count= 1 avg_used= 704 block_cnt= 1 chunk_cnt= 1 mod=ScheMgrCacheMap [MEMORY] hold= 704 used= 512 count= 1 avg_used= 512 block_cnt= 1 chunk_cnt= 1 mod=SqlString [MEMORY] hold= 704 used= 512 count= 1 avg_used= 512 block_cnt= 1 chunk_cnt= 1 mod=SqlSessiQuerSql [MEMORY] hold= 704 used= 512 count= 1 avg_used= 512 
block_cnt= 1 chunk_cnt= 1 mod=TsMgr [MEMORY] hold= 672 used= 272 count= 2 avg_used= 136 block_cnt= 2 chunk_cnt= 1 mod=unknown [MEMORY] hold= 624 used= 424 count= 1 avg_used= 424 block_cnt= 1 chunk_cnt= 1 mod=PackStateMap [MEMORY] hold= 624 used= 424 count= 1 avg_used= 424 block_cnt= 1 chunk_cnt= 1 mod=SequenceMap [MEMORY] hold= 624 used= 424 count= 1 avg_used= 424 block_cnt= 1 chunk_cnt= 1 mod=ContextsMap [MEMORY] hold= 624 used= 424 count= 1 avg_used= 424 block_cnt= 1 chunk_cnt= 1 mod=SequenceIdMap [MEMORY] hold= 416 used= 32 count= 2 avg_used= 16 block_cnt= 2 chunk_cnt= 1 mod=CreateEntity [MEMORY] hold= 352 used= 160 count= 1 avg_used= 160 block_cnt= 1 chunk_cnt= 1 mod=OccamTimeGuard [MEMORY] hold= 272 used= 56 count= 1 avg_used= 56 block_cnt= 1 chunk_cnt= 1 mod=PxTargetMgr [MEMORY] hold= 128 used= 7 count= 1 avg_used= 7 block_cnt= 1 chunk_cnt= 1 mod=SqlExpr [MEMORY] hold= 163,948,608 used= 161,175,799 count= 4,089 avg_used= 39,416 mod=SUMMARY [2024-09-13 13:02:25.696485] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=96] [MEMORY] tenant_id= 500 ctx_id= GLIBC hold= 6,291,456 used= 2,793,104 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 1,844,448 used= 905,967 count= 12,922 avg_used= 70 block_cnt= 234 chunk_cnt= 2 mod=Buffer [MEMORY] hold= 892,096 used= 602,046 count= 3,112 avg_used= 193 block_cnt= 193 chunk_cnt= 3 mod=glibc_malloc [MEMORY] hold= 53,600 used= 37,729 count= 229 avg_used= 164 block_cnt= 23 chunk_cnt= 2 mod=S3SDK [MEMORY] hold= 2,960 used= 1,222 count= 20 avg_used= 61 block_cnt= 7 chunk_cnt= 2 mod=XmlGlobal [MEMORY] hold= 2,793,104 used= 1,546,964 count= 16,283 avg_used= 95 mod=SUMMARY [2024-09-13 13:02:25.696496] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=5] [MEMORY] tenant_id= 500 ctx_id= CO_STACK hold= 104,857,600 used= 
103,219,200 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 103,219,200 used= 103,027,200 count= 200 avg_used= 515,136 block_cnt= 200 chunk_cnt= 50 mod=CoStack [MEMORY] hold= 103,219,200 used= 103,027,200 count= 200 avg_used= 515,136 mod=SUMMARY [2024-09-13 13:02:25.696512] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=10] [MEMORY] tenant_id= 500 ctx_id= LIBEASY hold= 4,194,304 used= 3,596,256 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 3,596,256 used= 3,464,800 count= 143 avg_used= 24,229 block_cnt= 24 chunk_cnt= 2 mod=OB_TEST2_PCODE [MEMORY] hold= 3,596,256 used= 3,464,800 count= 143 avg_used= 24,229 mod=SUMMARY [2024-09-13 13:02:25.696527] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=8] [MEMORY] tenant_id= 500 ctx_id= LOGGER_CTX_ID hold= 20,971,520 used= 20,807,680 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 16,646,144 used= 16,637,952 count= 8 avg_used= 2,079,744 block_cnt= 8 chunk_cnt= 8 mod=Logger [MEMORY] hold= 4,161,536 used= 4,159,488 count= 2 avg_used= 2,079,744 block_cnt= 2 chunk_cnt= 2 mod=ErrorLogger [MEMORY] hold= 20,807,680 used= 20,797,440 count= 10 avg_used= 2,079,744 mod=SUMMARY [2024-09-13 13:02:25.696884] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=4] [MEMORY] tenant_id= 500 ctx_id= RPC_CTX_ID hold= 2,097,152 used= 24,576 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 24,576 used= 16,256 count= 1 avg_used= 16,256 block_cnt= 1 chunk_cnt= 1 mod=RpcDefault 
[MEMORY] hold= 24,576 used= 16,256 count= 1 avg_used= 16,256 mod=SUMMARY [2024-09-13 13:02:25.696903] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=8] [MEMORY] tenant_id= 500 ctx_id= PKT_NIO hold= 18,989,056 used= 16,145,744 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 12,754,256 used= 12,676,888 count= 27 avg_used= 469,514 block_cnt= 12 chunk_cnt= 6 mod=DEFAULT [MEMORY] hold= 1,671,168 used= 1,571,712 count= 12 avg_used= 130,976 block_cnt= 12 chunk_cnt= 2 mod=PKTS_INBUF [MEMORY] hold= 1,253,376 used= 1,178,784 count= 9 avg_used= 130,976 block_cnt= 9 chunk_cnt= 3 mod=PKTC_INBUF [MEMORY] hold= 221,184 used= 146,592 count= 9 avg_used= 16,288 block_cnt= 9 chunk_cnt= 2 mod=SERVER_CTX_CHUN [MEMORY] hold= 98,304 used= 65,152 count= 4 avg_used= 16,288 block_cnt= 4 chunk_cnt= 1 mod=SERVER_RESP_CHU [MEMORY] hold= 73,728 used= 48,864 count= 3 avg_used= 16,288 block_cnt= 3 chunk_cnt= 1 mod=CLIENT_CB_CHUNK [MEMORY] hold= 73,728 used= 48,864 count= 3 avg_used= 16,288 block_cnt= 3 chunk_cnt= 1 mod=CLIENT_REQ_CHUN [MEMORY] hold= 16,145,744 used= 15,736,856 count= 67 avg_used= 234,878 mod=SUMMARY [2024-09-13 13:02:25.696982] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=14] [MEMORY] tenant_id= 500 ctx_id= SCHEMA_SERVICE hold= 11,292,672 used= 9,801,696 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 7,098,368 used= 7,078,824 count= 1 avg_used= 7,078,824 block_cnt= 1 chunk_cnt= 1 mod=SchemaIdVersion [MEMORY] hold= 2,088,896 used= 2,087,680 count= 2 avg_used= 1,043,840 block_cnt= 2 chunk_cnt= 2 mod=TenantSchemMgr [MEMORY] hold= 294,912 used= 262,144 count= 4 avg_used= 65,536 block_cnt= 4 chunk_cnt= 1 mod=SchemaMgrCache [MEMORY] hold= 200,832 used= 197,618 
count= 5 avg_used= 39,523 block_cnt= 3 chunk_cnt= 1 mod=SchemaSysCache [MEMORY] hold= 32,384 used= 31,616 count= 4 avg_used= 7,904 block_cnt= 4 chunk_cnt= 1 mod=ScheTenaInfoVec [MEMORY] hold= 16,832 used= 16,064 count= 4 avg_used= 4,016 block_cnt= 4 chunk_cnt= 1 mod=SchemaSysVariab [MEMORY] hold= 16,192 used= 15,808 count= 2 avg_used= 7,904 block_cnt= 2 chunk_cnt= 1 mod=ScheTablInfoVec [MEMORY] hold= 2,560 used= 1,024 count= 8 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheLabeSeCompo [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=ScheIndeNameMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=ScheTablIdMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=HiddenTblNames [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=ScheRoutIdMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=ScheRoutNameMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=ScheTablNameMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=SchePackIdMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=SchePackNameMap [MEMORY] hold= 1,920 used= 768 count= 6 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheLabeSePolic [MEMORY] hold= 1,920 used= 768 count= 6 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheLabeSeLabel [MEMORY] hold= 1,408 used= 1,024 count= 2 avg_used= 512 block_cnt= 2 chunk_cnt= 1 mod=ScheUdtNameMap [MEMORY] hold= 1,408 used= 1,024 count= 2 avg_used= 512 block_cnt= 2 chunk_cnt= 1 mod=ScheUdtIdMap [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=DBLINK_MGR [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaProfile [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 
mod=ScheLabSeUserLe [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaSynonym [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=DIRECTORY_MGR [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=RLS_POLICY_MGR [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheOutlSqlMap [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 4 chunk_cnt= 1 mod=RLS_GROUP_MGR [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=RLS_CONTEXT_MGR [MEMORY] hold= 784 used= 584 count= 1 avg_used= 584 block_cnt= 1 chunk_cnt= 1 mod=TenSchMemMgrFoL [MEMORY] hold= 784 used= 584 count= 1 avg_used= 584 block_cnt= 1 chunk_cnt= 1 mod=TenaScheMemMgr [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaKeystore [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheDataNameMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheAuxVpNameVe [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheForKeyNamMa [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=MockFkParentTab [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaContext [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheConsNameMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheOutlIdMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheOutlNameMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchePriTabPriMa [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=PRIV_ROUTINE [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchePriObjPriMa [MEMORY] hold= 640 used= 
256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaTablespac [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheTrigIdMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheTrigNameMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaUdf [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaSequence [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaSecurAudi [MEMORY] hold= 9,801,696 used= 9,721,130 count= 136 avg_used= 71,478 mod=SUMMARY [2024-09-13 13:02:25.697023] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=26] [MEMORY] tenant_id= 500 ctx_id= UNEXPECTED_IN_500 hold= 202,182,656 used= 200,226,400 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 134,238,416 used= 134,217,736 count= 2 avg_used= 67,108,868 block_cnt= 2 chunk_cnt= 2 mod=CACHE_MAP_BKT [MEMORY] hold= 18,173,952 used= 18,155,880 count= 1 avg_used= 18,155,880 block_cnt= 1 chunk_cnt= 1 mod=CACHE_MB_HANDLE [MEMORY] hold= 17,338,048 used= 8,968,960 count= 1,093 avg_used= 8,205 block_cnt= 1,093 chunk_cnt= 9 mod=StorageLoggerM [MEMORY] hold= 16,807,040 used= 16,786,208 count= 3 avg_used= 5,595,402 block_cnt= 3 chunk_cnt= 2 mod=FixeSizeBlocAll [MEMORY] hold= 6,311,936 used= 6,291,472 count= 1 avg_used= 6,291,472 block_cnt= 1 chunk_cnt= 1 mod=CACHE_MAP_LOCK [MEMORY] hold= 3,698,656 used= 3,284,064 count= 76 avg_used= 43,211 block_cnt= 54 chunk_cnt= 4 mod=OccamThreadPool [MEMORY] hold= 3,592,192 used= 3,573,960 count= 1 avg_used= 3,573,960 block_cnt= 1 chunk_cnt= 1 mod=TenantConfig [MEMORY] hold= 44,480 used= 35,968 count= 2 avg_used= 17,984 block_cnt= 2 chunk_cnt= 2 mod=CommonNetwork [MEMORY] hold= 13,552 used= 1,440 count= 90 avg_used= 16 block_cnt= 3 
chunk_cnt= 2 mod=ConfigChecker [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=BlockMap [MEMORY] hold= 200,226,400 used= 191,323,624 count= 1,270 avg_used= 150,648 mod=SUMMARY [2024-09-13 13:02:25.697041] INFO [LIB] operator() (ob_malloc_allocator.cpp:519) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=10] [MEMORY] tenant: 508, limit: 1,073,741,824 hold: 23,621,632 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 6,844,416 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 16,777,216 limit= 9,223,372,036,854,775,807 [2024-09-13 13:02:25.697061] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=7] [MEMORY] tenant_id= 508 ctx_id= DEFAULT_CTX_ID hold= 6,844,416 used= 5,117,152 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 2,650,112 used= 2,631,360 count= 1 avg_used= 2,631,360 block_cnt= 1 chunk_cnt= 1 mod=RpcStatInfo [MEMORY] hold= 1,720,320 used= 1,682,640 count= 30 avg_used= 56,088 block_cnt= 30 chunk_cnt= 2 mod=[T]ObSessionDIB [MEMORY] hold= 663,552 used= 659,200 count= 1 avg_used= 659,200 block_cnt= 1 chunk_cnt= 1 mod=MulLevelQueue [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=DynamicFactor [MEMORY] hold= 18,240 used= 12,480 count= 30 avg_used= 416 block_cnt= 6 chunk_cnt= 1 mod=OMT_Worker [MEMORY] hold= 15,840 used= 3,840 count= 60 avg_used= 64 block_cnt= 6 chunk_cnt= 1 mod=Coro [MEMORY] hold= 6,848 used= 720 count= 30 avg_used= 24 block_cnt= 6 chunk_cnt= 1 mod=[T]MemoryContex [MEMORY] hold= 1,280 used= 1,080 count= 1 avg_used= 1,080 block_cnt= 1 chunk_cnt= 1 mod=ModuleInitCtx [MEMORY] hold= 5,117,152 used= 5,028,352 count= 154 avg_used= 32,651 mod=SUMMARY [2024-09-13 13:02:25.697103] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) 
[19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=14] [MEMORY] tenant_id= 508 ctx_id= CO_STACK hold= 16,777,216 used= 15,482,880 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 15,482,880 used= 15,454,080 count= 30 avg_used= 515,136 block_cnt= 30 chunk_cnt= 8 mod=CoStack [MEMORY] hold= 15,482,880 used= 15,454,080 count= 30 avg_used= 515,136 mod=SUMMARY [2024-09-13 13:02:25.723177] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.723683] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.723706] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.723712] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.723721] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.723735] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has 
changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745723734, replica_locations:[]}) [2024-09-13 13:02:25.723790] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=52] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.723817] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.723824] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.723852] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.723914] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553601029, cache_obj->added_lc()=false, cache_obj->get_object_id()=305, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.724952] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.725383] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.725416] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.725427] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.725447] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.725472] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745725470, replica_locations:[]}) [2024-09-13 13:02:25.725558] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=52000, remain_us=539264, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, 
v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.726525] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=17] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:25.726561] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=19] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:25.738636] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=39][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:25.753400] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8A-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745752697) [2024-09-13 13:02:25.753462] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8A-0-0] [lt=53][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203745752697}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, 
valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:25.753497] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:25.753524] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745753482) [2024-09-13 13:02:25.753536] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203745653234, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:25.753564] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.753573] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min 
weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.753579] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745753549) [2024-09-13 13:02:25.777809] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.778392] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.778418] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.778428] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.778486] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=56] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.778509] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745778507, replica_locations:[]}) [2024-09-13 13:02:25.778530] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.778562] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.778576] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.778606] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.778670] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553655782, cache_obj->added_lc()=false, cache_obj->get_object_id()=306, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.779803] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.780199] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.780219] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.780225] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.780233] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.780244] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745780243, replica_locations:[]}) [2024-09-13 13:02:25.780299] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=53000, remain_us=484523, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 
13:02:25.824552] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=22][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:609, tid:19945}]) [2024-09-13 13:02:25.833037] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=63][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:25.833076] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=37][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:25.833106] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:25.833113] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:25.833120] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=5] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:25.833546] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.834055] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.834078] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.834085] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.834093] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.834106] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745834105, replica_locations:[]}) [2024-09-13 13:02:25.834121] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.834142] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, 
stmt_retry_times:53, local_retry_times:53, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:25.834160] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.834170] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.834179] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:25.834197] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:25.834201] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:25.834228] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:25.834240] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.834291] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6553711405, cache_obj->added_lc()=false, cache_obj->get_object_id()=307, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.835272] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:25.835297] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:25.835420] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.835738] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.835762] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.835770] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.835780] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.835796] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745835795, replica_locations:[]}) [2024-09-13 13:02:25.835814] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:25.835827] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:25.835836] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, 
renew_time:0, replica_locations:[]}) [2024-09-13 13:02:25.835848] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:25.835854] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:25.835861] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:25.835884] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:25.835894] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:25.835900] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:25.835909] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:25.835918] 
WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:25.835924] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:25.835936] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:25.835948] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:25.835954] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:25.835963] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:25.835969] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:25.835979] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:25.835983] WDIAG [SQL] generate_plan 
(ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:25.835994] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:25.836000] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:25.836008] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:25.836013] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:25.836021] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:25.836026] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=54, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:25.836044] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] will sleep(sleep_us=54000, remain_us=428778, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.853568] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:25.853601] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=32][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745853559) [2024-09-13 13:02:25.853611] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203745753546, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:25.853634] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.853643] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.853648] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745853618) [2024-09-13 13:02:25.858142] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=43] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:25.858350] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B42-0-0] [lt=15] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:25.858365] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B42-0-0] [lt=13][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203745857880], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:25.858848] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD2-0-0] [lt=34][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203745858462, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035345, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203745858082}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:25.858914] WDIAG [RPC.FRAME] run 
(ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD2-0-0] [lt=64][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.859514] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD2-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:25.864314] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=55] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=6114, clean_start_pos=503316, clean_num=125829) [2024-09-13 13:02:25.873549] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=24] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.873576] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=13] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.873922] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:25.875272] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5) [2024-09-13 13:02:25.890294] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.890561] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, 
replica count=0) [2024-09-13 13:02:25.890584] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.890591] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.890600] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.890616] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745890615, replica_locations:[]}) [2024-09-13 13:02:25.890666] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=47] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.890689] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:54, local_retry_times:54, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, 
client_ret:-4721}, need_retry=true) [2024-09-13 13:02:25.890708] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.890717] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.890728] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:25.890736] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:25.890741] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:25.890756] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:25.890768] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.890817] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553767934, cache_obj->added_lc()=false, cache_obj->get_object_id()=308, 
cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.891741] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:25.891767] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:25.891929] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.892086] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.892100] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.892105] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.892116] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.892128] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745892127, replica_locations:[]}) [2024-09-13 13:02:25.892140] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:25.892150] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:25.892159] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:25.892170] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:25.892179] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:25.892187] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:25.892200] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:25.892207] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:25.892213] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:25.892218] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:25.892223] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:25.892228] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:25.892235] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:25.892244] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:25.892249] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:25.892256] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:25.892260] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:25.892266] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:25.892271] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) 
[2024-09-13 13:02:25.892285] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:25.892294] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:25.892303] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:25.892310] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:25.892315] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:25.892324] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=55, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:25.892342] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] will sleep(sleep_us=55000, remain_us=372480, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.947650] WDIAG [SERVER] 
fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.947950] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.947973] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.947980] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.947990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.948006] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745948005, replica_locations:[]}) [2024-09-13 13:02:25.948019] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, 
ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:25.948038] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:55, local_retry_times:55, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:25.948057] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:25.948066] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:25.948075] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:25.948079] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:25.948083] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:25.948116] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:25.948128] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:25.948180] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553825295, cache_obj->added_lc()=false, cache_obj->get_object_id()=309, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:25.949193] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:25.949229] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=36][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:25.949383] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:25.949544] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.949559] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:25.949564] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:25.949575] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:25.949587] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203745949587, replica_locations:[]}) [2024-09-13 13:02:25.949600] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:25.949608] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:25.949614] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:25.949625] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:25.949631] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:25.949639] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:25.949653] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:25.949664] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:25.949670] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:25.949679] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:25.949686] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:25.949690] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:25.949699] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:25.949708] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:25.949713] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:25.949717] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:25.949724] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:25.949729] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:25.949737] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:25.949748] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:25.949754] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:25.949759] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:25.949764] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:25.949770] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:25.949778] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM 
__all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=56, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:25.949799] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] will sleep(sleep_us=56000, remain_us=315024, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:25.953251] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8B-0-0] [lt=35][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203745952801) [2024-09-13 13:02:25.953279] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8B-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203745952801}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:25.953309] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.953329] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:25.953342] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203745953294) [2024-09-13 13:02:25.978455] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=24][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:26.006094] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.006508] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.006532] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.006539] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.006548] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.006564] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746006563, replica_locations:[]}) [2024-09-13 13:02:26.006580] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.006601] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:56, local_retry_times:56, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:26.006617] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.006623] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.006632] WDIAG [SERVER] inner_close 
(ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:26.006639] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:26.006643] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:26.006659] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:26.006670] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.006719] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553883835, cache_obj->added_lc()=false, cache_obj->get_object_id()=310, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.007687] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:26.007713] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:26.007865] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.008128] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.008142] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.008148] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.008158] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.008167] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746008166, replica_locations:[]})
[2024-09-13 13:02:26.008180] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:26.008190] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:26.008199] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:26.008211] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:26.008219] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:26.008229] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:26.008247] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:26.008261] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:26.008269] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:26.008278] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:26.008285] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:26.008290] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:26.008296] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:26.008302] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:26.008307] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:26.008310] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:26.008315] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:26.008321] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:26.008328] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:26.008339] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:26.008347] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:26.008356] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:26.008365] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:26.008373] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:26.008380] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=57, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:26.008400] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] will sleep(sleep_us=57000, remain_us=256422, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:26.043640] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20292][T1_L0_G0][T1][YB42AC103326-00062119D7A51A92-0-0] [lt=13][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:26.043672] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20292][T1_L0_G0][T1][YB42AC103326-00062119D7A51A92-0-0] [lt=30][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=1319166)
[2024-09-13 13:02:26.043680] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20292][T1_L0_G0][T1][YB42AC103326-00062119D7A51A92-0-0] [lt=7][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1)
[2024-09-13 13:02:26.043687] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:1126) [20292][T1_L0_G0][T1][YB42AC103326-00062119D7A51A92-0-0] [lt=6][errcode=-4012] base before process failed(ret=-4012)
[2024-09-13 13:02:26.043693] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20292][T1_L0_G0][T1][YB42AC103326-00062119D7A51A92-0-0] [lt=5][errcode=-4012] before process fail(ret=-4012)
[2024-09-13 13:02:26.043827] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20292][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=13][errcode=0] server is initiating(server_id=0, local_seq=34, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:26.044722] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=12][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:26.053328] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8C-0-0] [lt=22][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746052870)
[2024-09-13 13:02:26.053357] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:26.053356] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8C-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203746052870}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:26.053377] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746053350)
[2024-09-13 13:02:26.053388] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203745853617, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:26.053412] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:26.053418] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:26.053423] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746053397)
[2024-09-13 13:02:26.053432] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:26.053443] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:26.053452] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746053429)
[2024-09-13 13:02:26.064711] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=59] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:26.065702] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.066012] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.066037] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.066044] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.066055] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.066069] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746066068, replica_locations:[]})
[2024-09-13 13:02:26.066105] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=34] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.066125] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:57, local_retry_times:57, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:26.066143] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.066166] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.066177] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:26.066183] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:26.066187] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:26.066220] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:26.066232] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.066277] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6553943394, cache_obj->added_lc()=false, cache_obj->get_object_id()=311, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.067217] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:26.067241] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:26.067349] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.067582] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.067601] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.067609] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.067622] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.067634] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746067633, replica_locations:[]})
[2024-09-13 13:02:26.067647] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:26.067657] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:26.067663] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:26.067671] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:26.067677] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:26.067685] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:26.067699] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:26.067710] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:26.067718] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:26.067727] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:26.067731] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=3][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:26.067739] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:26.067745] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:26.067755] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:26.067761] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:26.067767] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:26.067801] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=197021, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:26.075356] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4)
[2024-09-13 13:02:26.092966] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=27] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:26.093122] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=55] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:26.093906] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:26.093903] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=19] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:26.094463] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:26.094976] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=23] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:26.095106] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=19] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:26.095302] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=10] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:26.095363] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=10] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:26.104552] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=49] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1})
[2024-09-13 13:02:26.110728] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D9A3A431-0-0] [lt=23][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:26.118743] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=31] swc wakeup.(stat_period_=1000000, ready=false)
[2024-09-13 13:02:26.126156] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.126234] WDIAG [SHARE] refresh (ob_alive_server_tracer.cpp:138) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C7E-0-0] [lt=4][errcode=-4002] invalid argument, empty server list(ret=-4002)
[2024-09-13 13:02:26.126254] WDIAG [SHARE] refresh (ob_alive_server_tracer.cpp:380) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C7E-0-0] [lt=19][errcode=-4002] refresh sever list failed(ret=-4002)
[2024-09-13 13:02:26.126260] WDIAG [SHARE] runTimerTask (ob_alive_server_tracer.cpp:255) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C7E-0-0] [lt=5][errcode=-4002] refresh alive server list failed(ret=-4002)
[2024-09-13 13:02:26.126581] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.126607] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.126618] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.126629] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.126648] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746126646, replica_locations:[]})
[2024-09-13 13:02:26.126667] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.126697] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.126707] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.126733] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.126791] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554003903, cache_obj->added_lc()=false, cache_obj->get_object_id()=312, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.128213] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.128450] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.128469] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.128475] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.128486] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.128499] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746128498, replica_locations:[]})
[2024-09-13 13:02:26.128555] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=136268, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203746264822)
[2024-09-13 13:02:26.133199] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC7B-0-0] [lt=30][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:26.135543] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, table_name.ptr()="data_size:12, data:5F5F616C6C5F736572766572", ret=-5019)
[2024-09-13 13:02:26.135565] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=20][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019)
[2024-09-13 13:02:26.135574] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_server, db_name=oceanbase)
[2024-09-13 13:02:26.135581] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=7][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_server)
[2024-09-13 13:02:26.135587] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=4][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:26.135593] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=6][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:26.135602] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=5][errcode=-5019] Table 'oceanbase.__all_server'
doesn't exist [2024-09-13 13:02:26.135613] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=9][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:26.135619] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=6][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:26.135625] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=6][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:26.135630] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:26.135634] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:26.135638] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=3][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:26.135642] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:26.135652] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:26.135656] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=3][errcode=-5019] Failed to 
generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:26.135662] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=5][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:26.135667] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:26.135671] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=3][errcode=-5019] fail to handle text query(stmt=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server, ret=-5019) [2024-09-13 13:02:26.135676] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=4][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:26.135681] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:26.135691] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=7][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:26.135710] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=15][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:26.135720] WDIAG [SERVER] force_close 
(ob_inner_sql_result.cpp:200) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=9][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:26.135725] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=5][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:26.135739] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:26.135750] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7E-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.135758] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19878][ServerGTimer][T0][YB42AC103323-000621F921960C7E-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, aret=-5019, ret=-5019) [2024-09-13 13:02:26.135771] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server) [2024-09-13 13:02:26.135779] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:26.135790] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] execute_read failed(ret=-5019, 
cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:26.135799] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203746135337, sql=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server) [2024-09-13 13:02:26.135808] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:26.135907] WDIAG [SHARE] refresh (ob_all_server_tracer.cpp:568) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-5019] fail to get servers_info(ret=-5019, ret="OB_TABLE_NOT_EXIST", GCTX.sql_proxy_=0x55a386ae7408) [2024-09-13 13:02:26.135915] WDIAG [SHARE] runTimerTask (ob_all_server_tracer.cpp:626) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] fail to refresh all server map(ret=-5019) [2024-09-13 13:02:26.145988] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=23] PNIO [ratelimit] time: 1726203746145985, bytes: 3094562, bw: 0.056820 MB/s, add_ts: 1005305, add_bytes: 59896 [2024-09-13 13:02:26.153428] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8D-0-0] [lt=22][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746152944) [2024-09-13 13:02:26.153474] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8D-0-0] [lt=39][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", 
version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203746152944}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:26.153492] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:26.153524] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746153484) [2024-09-13 13:02:26.153534] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203746053395, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:26.153558] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.153566] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.153572] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746153544) [2024-09-13 13:02:26.167155] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:565) [20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=38] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0) [2024-09-13 13:02:26.187821] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] 
[lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.188154] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.188176] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.188183] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.188192] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.188206] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746188205, replica_locations:[]}) [2024-09-13 13:02:26.188232] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=24] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], 
ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.188258] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.188264] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.188284] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.188332] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554065449, cache_obj->added_lc()=false, cache_obj->get_object_id()=313, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.189459] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.189684] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=36] PNIO [ratelimit] time: 1726203746189682, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007627, add_bytes: 0 [2024-09-13 13:02:26.190166] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) 
[2024-09-13 13:02:26.190184] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.190191] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.190198] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.190210] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746190209, replica_locations:[]}) [2024-09-13 13:02:26.190268] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=60000, remain_us=74554, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:26.197286] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.197678] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=32][errcode=-4719] get ls 
handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.198175] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.199513] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.201171] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.202317] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.204845] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.206002] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.209574] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.210614] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.211246] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=32] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA 
NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:26.215240] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.216323] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.221967] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.222972] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.226598] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:26.226637] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=17] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:26.227422] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:346) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=11] ====== check clog disk timer task ====== [2024-09-13 13:02:26.227464] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=40] get_disk_usage(ret=0, capacity(MB):=0, used(MB):=0) [2024-09-13 13:02:26.227479] INFO [STORAGE] cannot_recycle_log_over_threshold_ 
(ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=10] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false) [2024-09-13 13:02:26.228904] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=14] gc stale ls task succ [2024-09-13 13:02:26.229464] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.230388] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.233370] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=20] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:26.235269] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=0] server is initiating(server_id=0, local_seq=35, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:26.236327] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.236402] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=15] table not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, table_name.ptr()="data_size:16, data:5F5F616C6C5F6D657267655F696E666F", ret=-5019) [2024-09-13 13:02:26.236426] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=21][errcode=-5019] 
synonym not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, ret=-5019) [2024-09-13 13:02:26.236463] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=35][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_merge_info, db_name=oceanbase) [2024-09-13 13:02:26.236477] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=13][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_merge_info) [2024-09-13 13:02:26.236490] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=10][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:26.236497] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:26.236506] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=6][errcode=-5019] Table 'oceanbase.__all_merge_info' doesn't exist [2024-09-13 13:02:26.236517] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=10][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:26.236528] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=10][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:26.236538] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=9][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:26.236548] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=9][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:26.236559] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=10][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:26.236570] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=10][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:26.236580] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=10][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:26.236601] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=13][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:26.236612] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=9][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:26.236620] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=6][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:26.236623] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.236629] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=8][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:26.236639] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=9][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_merge_info WHERE tenant_id = '1', ret=-5019)
[2024-09-13 13:02:26.236651] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=10][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:26.236659] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:26.236675] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=11][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:26.236695] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=16][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:26.236705] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=9][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:26.236715] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=10][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:26.236738] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=10][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:26.236754] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=14][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.236765] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C7F-0-0] [lt=10][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:26.236777] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1')
[2024-09-13 13:02:26.236789] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:26.236800] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:26.236812] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203746236224, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1')
[2024-09-13 13:02:26.236822] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:26.236829] WDIAG [SHARE] load_global_merge_info (ob_global_merge_table_operator.cpp:49) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, meta_tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1')
[2024-09-13 13:02:26.236907] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_disk_io_calibration, table_name.ptr()="data_size:25, data:5F5F616C6C5F6469736B5F696F5F63616C6962726174696F6E", ret=-5019)
[2024-09-13 13:02:26.236926] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=17][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_disk_io_calibration, ret=-5019)
[2024-09-13 13:02:26.236936] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=9][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_disk_io_calibration, db_name=oceanbase)
[2024-09-13 13:02:26.236946] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_disk_io_calibration)
[2024-09-13 13:02:26.236953] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=6][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:26.236957] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:26.236912] WDIAG [STORAGE] refresh_merge_info (ob_tenant_freeze_info_mgr.cpp:890) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-5019] failed to load global merge info(ret=-5019, ret="OB_TABLE_NOT_EXIST", global_merge_info={tenant_id:1, cluster:{name:"cluster", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, frozen_scn:{name:"frozen_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, global_broadcast_scn:{name:"global_broadcast_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, last_merged_scn:{name:"last_merged_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, is_merge_error:{name:"is_merge_error", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, merge_status:{name:"merge_status", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, error_type:{name:"error_type", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, suspend_merging:{name:"suspend_merging", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, merge_start_time:{name:"merge_start_time", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, last_merged_time:{name:"last_merged_time", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}})
[2024-09-13 13:02:26.236963] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_disk_io_calibration' doesn't exist
[2024-09-13 13:02:26.236968] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=4][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:26.236965] WDIAG [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:1005) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=52][errcode=-5019] fail to refresh merge info(tmp_ret=-5019, tmp_ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:26.236972] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=3][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:26.236976] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=3][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:26.236979] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:26.236984] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:26.236988] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=3][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:26.236985] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=0] server is initiating(server_id=0, local_seq=36, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:26.236991] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:26.237000] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:26.237004] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=3][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:26.237009] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:26.237014] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:26.237018] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA", ret=-5019)
[2024-09-13 13:02:26.237023] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=4][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:26.237027] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=3][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA""}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:26.237036] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=6][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:26.237045] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=7][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:26.237048] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=3][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:26.237052] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:26.237060] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=3][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA""}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:26.237065] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C7F-0-0] [lt=5][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.237071] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19878][ServerGTimer][T0][YB42AC103323-000621F921960C7F-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA""}, aret=-5019, ret=-5019)
[2024-09-13 13:02:26.237079] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA")
[2024-09-13 13:02:26.237084] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:26.237090] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:26.237095] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203746236750, sql=select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA")
[2024-09-13 13:02:26.237104] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:26.237108] WDIAG [COMMON] parse_calibration_table (ob_io_calibration.cpp:829) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-5019] query failed(ret=-5019, sql_string=select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA")
[2024-09-13 13:02:26.237206] WDIAG [COMMON] read_from_table (ob_io_calibration.cpp:699) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] parse calibration data failed(ret=-5019)
[2024-09-13 13:02:26.237223] WDIAG [SERVER] refresh_io_calibration (ob_server.cpp:3477) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-5019] fail to refresh io calibration from table(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:26.237229] WDIAG [SERVER] runTimerTask (ob_server.cpp:3467) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] ObRefreshIOCalibrationTimeTask task failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:26.237261] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.237352] INFO [SERVER] refresh_cpu_frequency (ob_server.cpp:3395) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=3] Cpu frequency changed(from=2500000, to=2294608)
[2024-09-13 13:02:26.237507] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=0] read /sys/class/net/eth0/speed failed, errno 22
[2024-09-13 13:02:26.237518] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000
[2024-09-13 13:02:26.237523] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0")
[2024-09-13 13:02:26.237532] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR")
[2024-09-13 13:02:26.237749] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.237821] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.238047] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.238652] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.238737] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:26.238865] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.239081] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.239095] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.239101] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.239109] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.239122] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746239120, replica_locations:[]})
[2024-09-13 13:02:26.239170] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1997803, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.239251] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.239411] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.239452] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.239466] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.239480] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.239492] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746239491, replica_locations:[]})
[2024-09-13 13:02:26.239510] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.239536] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.239546] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.239574] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.239618] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554116735, cache_obj->added_lc()=false, cache_obj->get_object_id()=315, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.240766] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.240970] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.241000] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.241009] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.241024] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.241037] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746241037, replica_locations:[]})
[2024-09-13 13:02:26.241095] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=1000, remain_us=1995879, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.241962] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C84-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.242177] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.242200] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.242210] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.242221] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.242252] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=7][errcode=0] server is initiating(server_id=0, local_seq=37, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:26.242281] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.242951] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.242979] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.242990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.243002] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.243017] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746243015, replica_locations:[]})
[2024-09-13 13:02:26.243056] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=38] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.243085] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.243097] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.243129] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.243183] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=23][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554120300, cache_obj->added_lc()=false, cache_obj->get_object_id()=316, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.243597] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=14] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019)
[2024-09-13 13:02:26.243618] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=19][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-09-13 13:02:26.243629] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=11][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase)
[2024-09-13 13:02:26.243644] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=15][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-09-13 13:02:26.243671] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=24][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:26.243678] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:26.243688] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-09-13 13:02:26.243695] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=7][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:26.243706] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=10][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:26.243712] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=5][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:26.243722] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=9][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:26.243729] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:26.243739] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=9][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:26.243745] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:26.243759] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=11][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:26.243766] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:26.243774] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:26.243784] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=9][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:26.243792] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=7][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-09-13 13:02:26.243799] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:26.243810] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=10][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:26.243823] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=10][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:26.243839] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=14][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:26.243847] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=7][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:26.243854] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:26.243866] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:26.243898] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=31][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.243906] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:26.243922] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=16][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:26.243931] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:26.243938] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=7][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:26.243946] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=7][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203746243475, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:26.243955] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=9][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:26.243967]
WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=10][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:26.244028] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=11][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:26.244041] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=11][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:26.244049] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=8][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:26.244060] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=10][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:26.244069] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=8][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:26.244083] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=12][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:26.244090] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C84-0-0] [lt=6][errcode=-5019] fail 
to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:26.244126] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.244329] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.244345] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.244351] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.244358] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.244369] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746244368, replica_locations:[]}) [2024-09-13 13:02:26.244419] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1992554, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.246593] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.246799] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.246812] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.246817] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.246824] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.246831] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203746246830, replica_locations:[]}) [2024-09-13 13:02:26.246840] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.246911] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.246921] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.246939] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.246979] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554124099, cache_obj->added_lc()=false, cache_obj->get_object_id()=317, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.247096] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.247700] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] 
[lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.247913] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.247929] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.247935] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.247942] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.247950] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746247949, replica_locations:[]}) [2024-09-13 13:02:26.247987] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1988987, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203748236973) 
[2024-09-13 13:02:26.248135] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.250632] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.250887] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.250904] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.250910] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.250917] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.250926] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746250925, 
replica_locations:[]}) [2024-09-13 13:02:26.250938] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.250962] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.250973] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.251008] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.251064] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554128170, cache_obj->added_lc()=false, cache_obj->get_object_id()=314, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.251398] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.251647] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] 
[lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.251666] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.251687] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.251695] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.251704] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746251704, replica_locations:[]}) [2024-09-13 13:02:26.251714] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.251729] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is 
null(ret=-4006) [2024-09-13 13:02:26.251737] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.251757] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.251787] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554128907, cache_obj->added_lc()=false, cache_obj->get_object_id()=318, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.251899] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.252102] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.252123] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.252133] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10] leader doesn't 
exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.252146] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.252157] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746252156, replica_locations:[]}) [2024-09-13 13:02:26.252205] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1] will sleep(sleep_us=12618, remain_us=12618, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203746264822) [2024-09-13 13:02:26.252571] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.252775] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.252791] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.252808] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.252816] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.252825] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746252824, replica_locations:[]}) [2024-09-13 13:02:26.252862] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1984111, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.253510] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8E-0-0] [lt=31][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746253013) [2024-09-13 13:02:26.253539] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8E-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC 
fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203746253013}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:26.253553] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:26.253570] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746253547) [2024-09-13 13:02:26.253585] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203746153542, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:26.253600] WDIAG [STORAGE] 
get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:26.253612] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:26.253634] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.253642] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.253647] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746253624) [2024-09-13 13:02:26.257041] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.257303] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:26.257322] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.257328] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.257336] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.257348] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746257348, replica_locations:[]}) [2024-09-13 13:02:26.257361] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.257392] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.257401] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.257416] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.257459] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554134579, cache_obj->added_lc()=false, cache_obj->get_object_id()=320, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.257576] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.258224] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.258423] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.258457] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.258463] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.258470] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.258479] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746258479, replica_locations:[]}) [2024-09-13 13:02:26.258520] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1978454, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.258763] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.263742] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.264015] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:26.264034] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.264041] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.264049] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.264066] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746264065, replica_locations:[]}) [2024-09-13 13:02:26.264082] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.264104] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.264112] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.264149] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.264193] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554141311, cache_obj->added_lc()=false, cache_obj->get_object_id()=321, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.265077] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=15][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203746264824, ctx_timeout_ts=1726203746264824, worker_timeout_ts=1726203746264822, default_timeout=1000000) [2024-09-13 13:02:26.265095] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=17][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:26.265102] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:26.265111] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.265120] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) [2024-09-13 13:02:26.265132] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.265128] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.265138] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.265128] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=74] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:26.265154] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.265190] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=4][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6554142305, cache_obj->added_lc()=false, cache_obj->get_object_id()=319, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.265903] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203746264822, ctx_timeout_ts=1726203746264822, worker_timeout_ts=1726203746264822, default_timeout=1000000) [2024-09-13 13:02:26.265920] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:26.265926] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:26.265934] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:26.265939] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.265945] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=11][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.265960] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=14][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:26.265964] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.265975] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.265986] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.265996] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=0][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:26.266003] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] [LS_LOCATION]ls location cache has 
changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746266003, replica_locations:[]}) [2024-09-13 13:02:26.266014] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=16][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.266021] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.266044] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=7] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:26.266063] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=1][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:26.266085] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1970888, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.266089] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:26.266099] WDIAG [SQL] 
move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.266109] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=7] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=1999558) [2024-09-13 13:02:26.266120] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=10][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:26.266129] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:26.266136] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:26.266142] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:26.266149] WDIAG [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] 
[lt=6][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1) [2024-09-13 13:02:26.266160] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:26.266222] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554143315, cache_obj->added_lc()=false, cache_obj->get_object_id()=323, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.266286] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:26.266299] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:26.266307] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=7][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:26.266314] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) 
[19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=6][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:26.266327] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4601) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:26.266336] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=8][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1) [2024-09-13 13:02:26.266345] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=10] [REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, cost=2001525) [2024-09-13 13:02:26.266355] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=9][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1) [2024-09-13 13:02:26.266365] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=10] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2001561) [2024-09-13 13:02:26.266378] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=12][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1]) [2024-09-13 13:02:26.266389] INFO [SERVER] 
process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=10] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:26.266395] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7D-0-0] [lt=5][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:26.266402] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19945][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4012] fail to batch process task(ret=-4012) [2024-09-13 13:02:26.266412] WDIAG [SERVER] run1 (ob_uniq_task_queue.h:456) [19945][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1) [2024-09-13 13:02:26.266454] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=5] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1]) [2024-09-13 13:02:26.266472] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=15] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1) [2024-09-13 13:02:26.267844] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:26.267974] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.268200] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.268214] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.268220] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.268226] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.268234] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746268234, replica_locations:[]}) [2024-09-13 13:02:26.268276] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1998205, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.268333] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.268519] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.268529] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.268534] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.268539] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.268546] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746268545, replica_locations:[]}) [2024-09-13 13:02:26.268554] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.268566] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.268576] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.268597] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.268628] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554145747, cache_obj->added_lc()=false, cache_obj->get_object_id()=324, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.269223] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.269282] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.269473] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.269494] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.269506] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.269517] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.269524] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.269531] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746269531, replica_locations:[]}) [2024-09-13 13:02:26.269564] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=1000, remain_us=1996916, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.269786] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, 
replica count=0) [2024-09-13 13:02:26.269817] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.269836] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.269844] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.269850] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.269857] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:26.269863] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:26.269868] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638) [2024-09-13 13:02:26.269930] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.270112] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.270141] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.270149] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.270155] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.270163] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746270162, replica_locations:[]}) [2024-09-13 13:02:26.270173] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.270180] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=16][errcode=-4721] REACH SYSLOG RATE 
LIMIT [bandwidth] [2024-09-13 13:02:26.270190] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0][errcode=-4638] fail to refresh core partition(tmp_ret=-4721) [2024-09-13 13:02:26.270363] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000) [2024-09-13 13:02:26.270371] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] [2024-09-13 13:02:26.270501] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.270673] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.270685] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.270690] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.270696] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.270701] WDIAG [SHARE] 
renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.270714] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:26.270720] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:26.270724] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0) [2024-09-13 13:02:26.270755] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.270795] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.270933] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.270944] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.270950] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.270955] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.270943] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.270960] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.270966] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:26.270962] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.270970] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:26.270972] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.270981] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1) [2024-09-13 13:02:26.270984] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.270996] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746270995, replica_locations:[]}) [2024-09-13 13:02:26.271008] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.271029] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.271037] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.271056] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan 
cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.271066] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.271091] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554148208, cache_obj->added_lc()=false, cache_obj->get_object_id()=325, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.271205] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.271216] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.271221] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.271227] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.271232] WDIAG [SHARE] 
renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.271237] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:26.271242] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:26.271245] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2) [2024-09-13 13:02:26.271250] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638) [2024-09-13 13:02:26.271255] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:26.271267] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2) [2024-09-13 13:02:26.272004] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.272284] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.272341] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.272359] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.272365] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.272375] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.272399] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746272398, replica_locations:[]}) [2024-09-13 13:02:26.272450] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1994030, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, 
v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.272552] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.272580] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.272588] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.272596] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.272623] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746272623, replica_locations:[]}) [2024-09-13 13:02:26.272636] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], 
error_code=-4721) [2024-09-13 13:02:26.272655] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.272663] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.272681] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.272715] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554149834, cache_obj->added_lc()=false, cache_obj->get_object_id()=322, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.273467] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.273702] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.273743] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.273754] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.273769] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.273798] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=24] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746273798, replica_locations:[]}) [2024-09-13 13:02:26.273850] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1963123, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.274623] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.274941] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.274961] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.274970] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.274977] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.274985] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746274984, replica_locations:[]}) [2024-09-13 13:02:26.274997] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.275012] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.275019] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:26.275045] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.275079] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554152197, cache_obj->added_lc()=false, cache_obj->get_object_id()=326, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.275451] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3) [2024-09-13 13:02:26.275763] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.276021] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.276037] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.276044] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.276052] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.276062] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746276062, replica_locations:[]}) [2024-09-13 13:02:26.276107] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1990374, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.279309] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.279530] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.279547] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.279553] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.279560] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.279569] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746279568, replica_locations:[]}) [2024-09-13 13:02:26.279582] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.279602] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.279607] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.279621] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not 
valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.279648] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554156768, cache_obj->added_lc()=false, cache_obj->get_object_id()=328, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.280380] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.280616] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.280637] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.280647] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.280660] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:26.280675] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746280674, replica_locations:[]})
[2024-09-13 13:02:26.280726] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1985754, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.281092] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.281252] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.281270] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.281276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.281284] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.281292] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746281292, replica_locations:[]})
[2024-09-13 13:02:26.281305] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.281322] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.281327] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.281358] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.281392] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554158511, cache_obj->added_lc()=false, cache_obj->get_object_id()=327, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.281617] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.282162] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.282373] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.282388] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.282394] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.282401] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.282408] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746282408, replica_locations:[]})
[2024-09-13 13:02:26.282510] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1] will sleep(sleep_us=8000, remain_us=1954464, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.282695] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.284940] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.285136] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.285153] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.285159] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.285166] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.285174] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746285174, replica_locations:[]})
[2024-09-13 13:02:26.285187] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.285204] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.285212] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.285235] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.285273] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554162392, cache_obj->added_lc()=false, cache_obj->get_object_id()=329, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.286012] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.286236] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.286258] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.286266] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.286276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.286290] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746286289, replica_locations:[]})
[2024-09-13 13:02:26.286328] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=5000, remain_us=1980152, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.290703] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.290978] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.290996] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.291023] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.291031] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.291043] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746291042, replica_locations:[]})
[2024-09-13 13:02:26.291053] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.291073] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.291081] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.291104] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.291140] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554168259, cache_obj->added_lc()=false, cache_obj->get_object_id()=330, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.291522] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.291748] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.291771] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.291782] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.291793] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.291807] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746291807, replica_locations:[]})
[2024-09-13 13:02:26.291829] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.291855] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.291862] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.291910] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.291949] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.291993] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554169108, cache_obj->added_lc()=false, cache_obj->get_object_id()=331, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.292144] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.292160] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.292179] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.292189] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.292200] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746292200, replica_locations:[]})
[2024-09-13 13:02:26.292236] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1944737, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.292951] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.293175] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.293192] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.293198] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.293205] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.293213] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746293213, replica_locations:[]})
[2024-09-13 13:02:26.293255] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1973226, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.295147] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.296300] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.299473] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.299785] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.299803] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.299810] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.299818] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.299828] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746299828, replica_locations:[]})
[2024-09-13 13:02:26.299843] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.299863] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.299868] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.299902] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.299937] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554177056, cache_obj->added_lc()=false, cache_obj->get_object_id()=333, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.300743] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.300955] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.300972] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.300978] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.300985] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.300993] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746300993, replica_locations:[]})
[2024-09-13 13:02:26.301036] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1965444, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.301404] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.301603] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.301627] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.301637] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.301648] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.301666] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746301665, replica_locations:[]})
[2024-09-13 13:02:26.301687] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.301729] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.301742] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.301770] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.301818] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554178934, cache_obj->added_lc()=false, cache_obj->get_object_id()=332, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.303022] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.303269] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.303293] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.303301] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.303310] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.303321] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746303321, replica_locations:[]})
[2024-09-13 13:02:26.303360] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=10000, remain_us=1933614, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.308238] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.308583] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.308608] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.308619] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.308634] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.308648] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746308647, replica_locations:[]})
[2024-09-13 13:02:26.308669] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.308693] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.308703] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.308725] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.308772] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554185887, cache_obj->added_lc()=false, cache_obj->get_object_id()=334, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.309638] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.309730] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.309898] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.309914] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.309921] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.309928] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.309936] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746309935, replica_locations:[]})
[2024-09-13 13:02:26.309978] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1956503, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.310821] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.313564] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.313799] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.313819] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.313825] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.313834] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.313846] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746313845, replica_locations:[]}) [2024-09-13 13:02:26.313857] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.313886] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.313891] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.313922] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.313961] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] 
[lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554191079, cache_obj->added_lc()=false, cache_obj->get_object_id()=335, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.314857] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.315071] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.315088] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.315094] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.315101] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.315112] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746315111, replica_locations:[]}) [2024-09-13 13:02:26.315155] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1921818, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.318142] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.318408] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.318426] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.318433] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.318451] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.318462] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746318461, replica_locations:[]}) [2024-09-13 13:02:26.318475] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.318495] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.318503] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.318526] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.318562] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554195681, cache_obj->added_lc()=false, cache_obj->get_object_id()=336, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:26.319333] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.319593] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.319610] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.319617] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.319624] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.319632] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746319631, replica_locations:[]}) [2024-09-13 13:02:26.319685] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1946796, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.320974] INFO [SHARE] blacklist_loop_ (ob_server_blacklist.cpp:313) [20019][Blacklist][T0][Y0-0000000000000000-0-0] [lt=19] blacklist_loop exec finished(cost_time=17, is_enabled=true, send_cnt=0) [2024-09-13 13:02:26.321055] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=36] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:26.324768] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:26.325316] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.326339] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.326570] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.326782] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.326801] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4018] fail to 
get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.326811] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.326843] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=30] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.326859] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746326858, replica_locations:[]}) [2024-09-13 13:02:26.326893] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=31] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.326918] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.326925] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.326950] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.327008] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554204125, cache_obj->added_lc()=false, cache_obj->get_object_id()=337, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.328216] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.328448] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.328468] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.328477] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.328500] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.328509] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746328509, replica_locations:[]}) [2024-09-13 13:02:26.328559] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1908415, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.328882] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.329229] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.329247] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.329253] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.329260] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.329272] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746329271, replica_locations:[]}) [2024-09-13 13:02:26.329291] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.329316] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.329324] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.329346] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.329396] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554206511, cache_obj->added_lc()=false, cache_obj->get_object_id()=338, cache_obj->get_tenant_id()=1, 
lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.330316] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.330554] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.330572] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.330579] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.330586] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.330595] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203746330594, replica_locations:[]}) [2024-09-13 13:02:26.330646] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=10000, remain_us=1935835, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.333748] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:26.333769] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:26.333827] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=36][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:26.333836] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:26.333843] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=5] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:26.333851] WDIAG [STORAGE.TRANS] operator() (ob_ts_mgr.h:175) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4721] refresh gts failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:26.333858] 
INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=6] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:26.333851] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CA7-0-0] [lt=28][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203746333806}) [2024-09-13 13:02:26.340804] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.340962] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.341066] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.341092] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.341102] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.341111] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.341123] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746341122, replica_locations:[]})
[2024-09-13 13:02:26.341141] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.341180] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.341189] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.341196] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.341217] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.341219] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.341229] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.341248] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.341258] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554218375, cache_obj->added_lc()=false, cache_obj->get_object_id()=339, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.341266] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746341265, replica_locations:[]})
[2024-09-13 13:02:26.341281] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.341308] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.341319] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.341350] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.341401] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554218516, cache_obj->added_lc()=false, cache_obj->get_object_id()=340, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.341417] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:26.342123] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.342362] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.342408] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.342733] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.342753] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.342762] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.342776] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.342789] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746342788, replica_locations:[]})
[2024-09-13 13:02:26.342844] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=13000, remain_us=1894130, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.342859] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.342873] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.342889] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.342903] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.342919] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746342918, replica_locations:[]})
[2024-09-13 13:02:26.342973] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1923508, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.343381] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.348598] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=27] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1)
[2024-09-13 13:02:26.353552] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8F-0-0] [lt=26][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746353090)
[2024-09-13 13:02:26.353588] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A8F-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203746353090}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:26.353625] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:26.353643] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746353618)
[2024-09-13 13:02:26.353654] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203746253598, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:26.353682] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:26.353695] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:26.353702] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746353666)
[2024-09-13 13:02:26.354167] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.354495] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.354512] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.354517] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.354524] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.354535] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746354535, replica_locations:[]})
[2024-09-13 13:02:26.354550] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.354571] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.354577] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.354609] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.354652] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554231769, cache_obj->added_lc()=false, cache_obj->get_object_id()=342, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.355604] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.355929] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.355948] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.355955] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.355962] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.355971] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746355971, replica_locations:[]})
[2024-09-13 13:02:26.356022] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1910458, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.356098] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.356269] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=62][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.356292] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.356302] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.356313] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.356330] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746356330, replica_locations:[]})
[2024-09-13 13:02:26.356350] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.356376] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.356388] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.356419] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.356473] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554233591, cache_obj->added_lc()=false, cache_obj->get_object_id()=341, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.357576] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.357738] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.357758] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.357765] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.357773] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.357782] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746357782, replica_locations:[]})
[2024-09-13 13:02:26.357866] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1879108, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.358831] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B43-0-0] [lt=20] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:26.358851] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B43-0-0] [lt=20][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203746358349], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:26.359457] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD3-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:26.359858] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.360409] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD3-0-0] [lt=12][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203746360081, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035365, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203746359292}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:26.360433] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD3-0-0] [lt=24][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:26.360951] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.367358] INFO [STORAGE] runTimerTask (ob_tenant_memory_printer.cpp:32) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7] === Run print tenant memory usage task ===
[2024-09-13 13:02:26.367411] INFO [STORAGE] print_tenant_usage (ob_tenant_memory_printer.cpp:102) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=15] ====== tenants memory info ====== === TENANTS MEMORY INFO === divisive_memory_used= 48,541,696 [TENANT_MEMORY] tenant_id= 500 mem_tenant_limit= 9,223,372,036,854,775,807 mem_tenant_hold= 540,209,152 kv_cache_mem= 0 [TENANT_MEMORY] tenant_id= 508 mem_tenant_limit= 1,073,741,824 mem_tenant_hold= 23,621,632 kv_cache_mem= 0 [TENANT_MEMORY] tenant_id= 1 now= 1,726,203,746,367,058 active_memstore_used= 0 total_memstore_used= 0 total_memstore_hold= 0 memstore_freeze_trigger_limit= 257,698,020 memstore_limit= 1,288,490,160 mem_tenant_limit= 3,221,225,472 mem_tenant_hold= 355,610,624 max_mem_memstore_can_get_now= 0 memstore_alloc_pos= 0 memstore_frozen_pos= 0 memstore_reclaimed_pos= 0
[2024-09-13 13:02:26.367602] INFO [STORAGE] print_tenant_usage (ob_tenant_memory_printer.cpp:114) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=12] [CHUNK_MGR] limit= 17,179,869,184 hold= 921,538,560 total_hold= 985,661,440 used= 919,441,408 freelists_hold= 2,097,152 total_maps= 293 total_unmaps= 3 large_maps= 39 large_unmaps= 0 huge_maps= 6 huge_unmaps= 3 memalign=0 resident_size= 939,180,032 virtual_memory_used= 1,832,943,616 [CHUNK_MGR] 2 MB_CACHE: hold= 2,097,152 free= 1 pushes= 1,453 pops= 1,452 maps= 248 unmaps= 0 [CHUNK_MGR] 4 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 19 unmaps= 0 [CHUNK_MGR] 6 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 11 unmaps= 0 [CHUNK_MGR] 8 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 2 unmaps= 0 [CHUNK_MGR] 10 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 4 unmaps= 0 [CHUNK_MGR] 12 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 0 unmaps= 0 [CHUNK_MGR] 14 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 1 unmaps= 0 [CHUNK_MGR] 16 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 0 unmaps= 0 [CHUNK_MGR] 18 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 2 unmaps= 0 [CHUNK_MGR] 20 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 0 unmaps= 0
[2024-09-13 13:02:26.367639] INFO print (ob_malloc_time_monitor.cpp:39) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9] [MALLOC_TIME_MONITOR] show the distribution of ob_malloc's cost_time [MALLOC_TIME_MONITOR] [ 0, 10): delta_total_cost_time= 1909034, delta_count= 15544333, avg_cost_time= 0 [MALLOC_TIME_MONITOR] [ 10, 100): delta_total_cost_time= 54588, delta_count= 3234, avg_cost_time= 16 [MALLOC_TIME_MONITOR] [ 100, 1000): delta_total_cost_time= 17733, delta_count= 63, avg_cost_time= 281 [MALLOC_TIME_MONITOR] [ 1000, 10000): delta_total_cost_time= 5050, delta_count= 3, avg_cost_time= 1683 [MALLOC_TIME_MONITOR] [ 10000, 100000): delta_total_cost_time= 0, delta_count= 0, avg_cost_time= 0 [MALLOC_TIME_MONITOR] [ 100000, 1000000): delta_total_cost_time= 0, delta_count= 0, avg_cost_time= 0 [MALLOC_TIME_MONITOR] [ 1000000, 9223372036854775807): delta_total_cost_time= 0, delta_count= 0, avg_cost_time= 0
[2024-09-13 13:02:26.368225] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.368558] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.368576] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.368583] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.368594] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.368604] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746368603, replica_locations:[]})
[2024-09-13 13:02:26.368618] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.368640] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.368649] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.368667] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.368714] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554245827, cache_obj->added_lc()=false, cache_obj->get_object_id()=343, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.369606] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_unit, table_name.ptr()="data_size:10, data:5F5F616C6C5F756E6974", ret=-5019)
[2024-09-13 13:02:26.369640] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=32][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_unit, ret=-5019)
[2024-09-13 13:02:26.369651] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=10][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_unit, db_name=oceanbase)
[2024-09-13 13:02:26.369673] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=21][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_unit)
[2024-09-13 13:02:26.369674] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.369683] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=8][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:26.369691] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:26.369700] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] Table 'oceanbase.__all_unit' doesn't exist
[2024-09-13 13:02:26.369707] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:26.369714] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:26.369721] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:26.369731] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=9][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:26.369738] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:26.369745] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:26.369752] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:26.369774] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=17][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:26.369785] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=10][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:26.369793] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:26.369804] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=10][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:26.369811] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] fail to handle text query(stmt=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1), ret=-5019)
[2024-09-13 13:02:26.369822] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=10][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:26.369829] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:26.369849] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=16][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:26.369868] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=16][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:26.369886] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=18][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:26.369896] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=9][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:26.369910] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:26.369932] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=21][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.369927] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.369941] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:26.369941] WDIAG [SERVER] query
(ob_inner_sql_connection.cpp:993) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)"}, aret=-5019, ret=-5019) [2024-09-13 13:02:26.369947] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.369950] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=8][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)) [2024-09-13 13:02:26.369957] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.369959] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=8][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:26.369966] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:26.369966] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746369965, replica_locations:[]}) [2024-09-13 13:02:26.369974] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203746369291, sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)) [2024-09-13 13:02:26.369988] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=13][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:26.369998] WDIAG [SHARE] read_units (ob_unit_table_operator.cpp:1150) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] execute sql failed(sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1), ret=-5019) [2024-09-13 13:02:26.370007] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=13000, remain_us=1896473, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.370061] WDIAG [SHARE] get_units_by_tenant (ob_unit_table_operator.cpp:840) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=11][errcode=-5019] read_units failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)) [2024-09-13 13:02:26.370079] WDIAG [SHARE] get_sys_unit_count (ob_unit_table_operator.cpp:67) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=15][errcode=-5019] failed to get units by tenant(ret=-5019, 
ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:26.370100] WDIAG [SHARE] get_sys_unit_count (ob_unit_getter.cpp:436) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=17][errcode=-5019] ut_operator get sys unit count failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:26.370112] WDIAG [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:95) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=10][errcode=-5019] get sys unit count fail(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:26.370120] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:109) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=7] refresh tenant units(sys_unit_cnt=0, units=[], ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:26.371321] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=12] table not exist(tenant_id=1, database_id=201001, table_name=__all_tenant, table_name.ptr()="data_size:12, data:5F5F616C6C5F74656E616E74", ret=-5019) [2024-09-13 13:02:26.371345] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=22][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_tenant, ret=-5019) [2024-09-13 13:02:26.371361] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=15][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_tenant, db_name=oceanbase) [2024-09-13 13:02:26.371371] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_tenant) [2024-09-13 13:02:26.371379] WDIAG 
[SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:26.371389] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=10][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:26.371398] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] Table 'oceanbase.__all_tenant' doesn't exist [2024-09-13 13:02:26.371408] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=10][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:26.371427] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=19][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:26.371442] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:26.371449] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:26.371460] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=10][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:26.371466] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] 
[lt=6][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:26.371476] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=10][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:26.371487] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=8][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:26.371498] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=11][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:26.371506] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:26.371516] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=10][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:26.371523] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=6][errcode=-5019] fail to handle text query(stmt=SELECT tenant_id FROM __all_tenant, ret=-5019) [2024-09-13 13:02:26.371533] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=9][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:26.371540] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT tenant_id FROM __all_tenant"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) 
[2024-09-13 13:02:26.371563] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=20][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:26.371579] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=14][errcode=-5019] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:26.371593] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.371668] INFO [SERVER] cal_all_part_disk_default_percentage (ob_server_utils.cpp:301) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=1] cal_all_part_disk_default_percentage succ(data_dir="/data1/oceanbase/data/sstable", clog_dir="/data1/oceanbase/data/clog", shared_mode=true, data_disk_total_size=300808052736, data_disk_default_percentage=60, clog_disk_total_size=300808052736, clog_disk_default_percentage=30) [2024-09-13 13:02:26.371689] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:337) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=20] decide disk size finished(suggested_disk_size=21474836480, suggested_disk_percentage=0, default_disk_percentage=30, total_space=300808052736, disk_size=21474836480) [2024-09-13 13:02:26.371698] INFO [SERVER] get_log_disk_info_in_config (ob_server_utils.cpp:88) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=9] get_log_disk_info_in_config(suggested_data_disk_size=21474836480, suggested_clog_disk_size=21474836480, suggested_data_disk_percentage=0, suggested_clog_disk_percentage=0, log_disk_size=21474836480, log_disk_percentage=0, 
total_log_disk_size=300808052736) [2024-09-13 13:02:26.371715] INFO [CLOG] try_resize (ob_server_log_block_mgr.cpp:800) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=14] try_resize success(ret=0, log_disk_size=21474836480, total_log_disk_size=300808052736, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:21474836480, next_total_size:21474836480, status:0}, min_block_id:0, max_block_id:320, min_log_disk_size_for_all_tenants_:0, is_inited:true}) [2024-09-13 13:02:26.371734] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:133) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7E-0-0] [lt=19] refresh tenant config(tenants=[], ret=-5019) [2024-09-13 13:02:26.372108] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.372302] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.372319] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.372325] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.372331] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is 
empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.372340] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746372340, replica_locations:[]}) [2024-09-13 13:02:26.372366] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.372403] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.372415] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.372454] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.372502] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554249618, cache_obj->added_lc()=false, cache_obj->get_object_id()=344, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 
0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.374546] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.374746] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.374769] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.374778] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.374793] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.374806] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746374805, replica_locations:[]}) [2024-09-13 13:02:26.374897] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1] will sleep(sleep_us=15000, remain_us=1862077, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.377848] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:2420) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=4] dump tenant info(tenant={id:1, tenant_meta:{unit:{tenant_id:1, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"hidden_sys_unit", resource:{min_cpu:2, max_cpu:2, memory_size:"3GB", log_disk_size:"0GB", min_iops:9223372036854775807, max_iops:9223372036854775807, iops_weight:2}}, mode:0, create_timestamp:1726203737966288, is_removed:false}, super_block:{tenant_id:1, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true, version:2}, create_status:1}, unit_min_cpu:"2.000000000000000000e+00", unit_max_cpu:"2.000000000000000000e+00", total_worker_cnt:25, min_worker_cnt:10, max_worker_cnt:150, stopped:0, worker_us:77964753, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:73, recv_lp_rpc_cnt:0, recv_mysql_cnt:2, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:928, workers:10, nesting workers:8, req_queue:total_size=1 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=1 queue[5]=0 , multi_level_queue:total_size=6 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=6 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=37 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:{group_id:10, queue_size:0, recv_req_cnt:8, min_worker_cnt:2, max_worker_cnt:150, multi_level_queue_:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 
cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , worker_cnt:2, nesting_worker_cnt:0, token_change:1726203739127015}{group_id:5, queue_size:0, recv_req_cnt:17, min_worker_cnt:2, max_worker_cnt:150, multi_level_queue_:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , worker_cnt:2, nesting_worker_cnt:0, token_change:1726203738351773}{group_id:19, queue_size:0, recv_req_cnt:1, min_worker_cnt:2, max_worker_cnt:150, multi_level_queue_:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , worker_cnt:1, nesting_worker_cnt:0, token_change:1726203741946044}{group_id:9, queue_size:0, recv_req_cnt:1122, min_worker_cnt:2, max_worker_cnt:150, multi_level_queue_:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , worker_cnt:2, nesting_worker_cnt:0, token_change:1726203738260543}, rpc_stat_info: pcode=0x14a:cnt=1122 pcode=0x717:cnt=72 pcode=0x51c:cnt=20 pcode=0x710:cnt=17 pcode=0x523:cnt=12, token_change_ts:1726203738244760, tenant_role:1}) [2024-09-13 13:02:26.378401] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.378418] INFO [SERVER.OMT] print_throttled_time (ob_tenant.cpp:1666) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=568] dump throttled time info(id_=1, throttled_time_log=group_id: 10, group: OBCG_LOC_CACHE, throttled_time: 0;group_id: 5, group: 
OBCG_ID_SERVICE, throttled_time: 0;group_id: 19, group: OBCG_STORAGE, throttled_time: 0;group_id: 9, group: OBCG_DETECT_RS, throttled_time: 0;tenant_id: 1, tenant_throttled_time: 0;) [2024-09-13 13:02:26.378452] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:2420) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=33] dump tenant info(tenant={id:508, tenant_meta:{unit:{tenant_id:508, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:5, max_cpu:5, memory_size:"1GB", log_disk_size:"2GB", min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1726203736354211, is_removed:false}, super_block:{tenant_id:508, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true, version:2}, create_status:1}, unit_min_cpu:"5.000000000000000000e+00", unit_max_cpu:"5.000000000000000000e+00", total_worker_cnt:30, min_worker_cnt:22, max_worker_cnt:150, stopped:0, worker_us:1496757, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:0, recv_lp_rpc_cnt:0, recv_mysql_cnt:0, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:19894, workers:22, nesting workers:8, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:, rpc_stat_info:, token_change_ts:1726203736360714, tenant_role:0}) [2024-09-13 13:02:26.378986] INFO [SERVER.OMT] print_throttled_time (ob_tenant.cpp:1666) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=533] dump throttled time info(id_=508, throttled_time_log=tenant_id: 508, tenant_throttled_time: 0;) [2024-09-13 13:02:26.379529] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.383206] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.383514] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.383556] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.383570] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.383584] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.383601] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746383599, replica_locations:[]}) [2024-09-13 13:02:26.383622] INFO 
[SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.383649] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.383663] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.383696] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.383754] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554260868, cache_obj->added_lc()=false, cache_obj->get_object_id()=345, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.384992] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.385283] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.385308] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.385319] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.385334] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.385351] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746385350, replica_locations:[]}) [2024-09-13 13:02:26.385416] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1881064, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.386457] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=19] ====== tenant freeze timer task ====== [2024-09-13 13:02:26.386487] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) 
[20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=18][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:26.390109] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.390387] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.390405] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.390411] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.390419] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.390433] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203746390432, replica_locations:[]}) [2024-09-13 13:02:26.390461] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.390481] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.390488] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.390507] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.390566] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554267684, cache_obj->added_lc()=false, cache_obj->get_object_id()=346, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.391639] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.391848] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.391869] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.391907] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=36] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.391915] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.391929] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746391928, replica_locations:[]}) [2024-09-13 13:02:26.391975] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1844999, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.398060] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] 
[lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.399607] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.399810] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.399905] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.399941] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.399951] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.399966] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.399980] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746399979, replica_locations:[]}) [2024-09-13 13:02:26.400000] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.400024] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.400036] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.400059] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.400113] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554277226, cache_obj->added_lc()=false, cache_obj->get_object_id()=347, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.401170] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.401415] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.401452] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.401462] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.401470] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.401479] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746401478, replica_locations:[]}) [2024-09-13 13:02:26.401525] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1864955, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.408191] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.408481] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.408506] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.408528] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.408540] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.408551] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746408550, replica_locations:[]}) [2024-09-13 13:02:26.408565] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", 
tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.408587] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.408596] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.408623] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.408663] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554285781, cache_obj->added_lc()=false, cache_obj->get_object_id()=348, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.409906] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.410063] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.410080] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.410095] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.410103] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.410111] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746410110, replica_locations:[]}) [2024-09-13 13:02:26.410154] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1826820, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.416722] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.417052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.417076] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.417083] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.417091] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.417104] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746417103, replica_locations:[]}) [2024-09-13 13:02:26.417116] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.417138] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.417147] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, 
do_close_plan_ret=-4006) [2024-09-13 13:02:26.417170] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.417214] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554294331, cache_obj->added_lc()=false, cache_obj->get_object_id()=349, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.418232] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.418500] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.418525] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.418536] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.418549] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.418558] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746418557, replica_locations:[]}) [2024-09-13 13:02:26.418605] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1847875, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.419310] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.420683] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.427416] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.427694] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.427717] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.427726] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.427736] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.427749] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746427748, replica_locations:[]}) [2024-09-13 13:02:26.427761] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.427791] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.427802] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, 
do_close_plan_ret=-4006) [2024-09-13 13:02:26.427836] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.427886] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554304996, cache_obj->added_lc()=false, cache_obj->get_object_id()=350, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.428958] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=43][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.429200] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.429221] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.429231] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.429245] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746429244, replica_locations:[]}) [2024-09-13 13:02:26.429295] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1807678, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.429913] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.430314] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92169005D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.430496] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.431751] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.433369] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.434451] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.435009] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.435237] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.435256] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.435267] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746435266, replica_locations:[]})
[2024-09-13 13:02:26.435285] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.435327] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.435339] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.435358] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.435404] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554312521, cache_obj->added_lc()=false, cache_obj->get_object_id()=351, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.436789] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.437025] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.437043] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.437052] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746437051, replica_locations:[]})
[2024-09-13 13:02:26.437056] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.437114] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=17000, remain_us=1829366, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.438115] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.441188] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.441916] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.442426] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.442946] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.447524] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.447566] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.447734] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.447759] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.447772] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746447771, replica_locations:[]})
[2024-09-13 13:02:26.447815] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=41] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.447843] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.447853] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.447891] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.447947] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554325052, cache_obj->added_lc()=false, cache_obj->get_object_id()=352, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.448542] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.449019] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.449227] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.449243] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.449253] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746449252, replica_locations:[]})
[2024-09-13 13:02:26.449307] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1787666, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.453591] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A90-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746453164)
[2024-09-13 13:02:26.453630] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A90-0-0] [lt=33][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203746453164}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:26.453664] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:26.453676] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:26.453682] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746453648)
[2024-09-13 13:02:26.454014] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.454262] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.454487] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.454506] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.454516] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746454516, replica_locations:[]})
[2024-09-13 13:02:26.454531] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.454551] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.454560] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.454584] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.454622] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554331739, cache_obj->added_lc()=false, cache_obj->get_object_id()=353, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.455081] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.455490] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.455706] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.455729] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.455747] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746455746, replica_locations:[]})
[2024-09-13 13:02:26.455806] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=18000, remain_us=1810674, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.461572] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.462585] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=32][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.463955] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.465159] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.465487] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=17] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:26.465631] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.466084] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.466761] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.467063] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.467304] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.468530] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.468796] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.468817] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.468830] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746468829, replica_locations:[]})
[2024-09-13 13:02:26.468846] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.468898] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.468908] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.468927] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.468986] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554346103, cache_obj->added_lc()=false, cache_obj->get_object_id()=354, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.470003] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.470186] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.470375] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.470392] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.470403] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746470402, replica_locations:[]})
[2024-09-13 13:02:26.470490] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1766484, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.471018] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.474017] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.474229] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.474245] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.474255] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746474255, replica_locations:[]})
[2024-09-13 13:02:26.474270] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.474290] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.474300] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.474318] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.474356] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554351473, cache_obj->added_lc()=false, cache_obj->get_object_id()=355, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.475206] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.475377] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.475395] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.475403] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746475403, replica_locations:[]})
[2024-09-13 13:02:26.475455] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1791026, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.475547] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=15] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2)
[2024-09-13 13:02:26.479011] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D8E48925-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:26.479448] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.480416] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.487763] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.489214] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.489844] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.490711] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.490931] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.491224] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.491245] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.491259] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746491257, replica_locations:[]})
[2024-09-13 13:02:26.491272] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.491304] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.491313] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.491337] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.491394] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554368508, cache_obj->added_lc()=false, cache_obj->get_object_id()=356, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.492782] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.492985] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.493012] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.493026] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746493025, replica_locations:[]})
[2024-09-13 13:02:26.493082] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1] will sleep(sleep_us=21000, remain_us=1743891, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.494667] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.495020] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.495038] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.495049] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746495048, replica_locations:[]})
[2024-09-13 13:02:26.495059] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.495080] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.495089] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.495114] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.495153] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554372270, cache_obj->added_lc()=false, cache_obj->get_object_id()=357, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.496012] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.496184] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.496200] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.496209] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746496208, replica_locations:[]})
[2024-09-13 13:02:26.496251] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1770230, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.501448] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:26.502546] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=22][errcode=-4719] get 
ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.512801] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.513998] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.514175] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.514295] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.514563] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.514596] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=31] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.514638] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=31] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746514636, replica_locations:[]}) [2024-09-13 13:02:26.514663] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.514703] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.514714] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.514740] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.514792] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554391909, cache_obj->added_lc()=false, cache_obj->get_object_id()=358, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.515078] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.515975] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.516234] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.516251] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.516262] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746516261, replica_locations:[]}) [2024-09-13 13:02:26.516322] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1720652, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.516453] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.516740] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.516751] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.516759] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746516758, replica_locations:[]}) [2024-09-13 13:02:26.516769] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.516785] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.516791] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.516809] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.516845] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554393962, cache_obj->added_lc()=false, cache_obj->get_object_id()=359, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 
0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.517676] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.517882] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.517895] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.517904] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746517904, replica_locations:[]}) [2024-09-13 13:02:26.517943] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1748538, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.527545] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.528732] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.538621] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.538706] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=32][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.538930] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.538957] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.538977] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746538975, replica_locations:[]}) [2024-09-13 13:02:26.538999] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.539041] WDIAG [SQL] do_close_plan 
(ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.539050] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.539079] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.539138] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554416252, cache_obj->added_lc()=false, cache_obj->get_object_id()=360, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.539278] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.539465] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.539480] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.539491] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746539490, replica_locations:[]}) [2024-09-13 13:02:26.539503] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.539520] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.539526] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.539550] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.539598] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554416715, cache_obj->added_lc()=false, cache_obj->get_object_id()=361, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:26.540249] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.540507] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:26.540589] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4719] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:26.540710] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.540723] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.540732] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746540731, replica_locations:[]}) [2024-09-13 13:02:26.540744] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.540758] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.540774] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746540773, replica_locations:[]}) [2024-09-13 13:02:26.540778] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1725703, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.540815] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1696158, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.553702] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A91-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746553250) [2024-09-13 13:02:26.553720] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, 
rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:26.553733] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A91-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203746553250}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:26.553745] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746553712) [2024-09-13 13:02:26.553758] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203746353663, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:26.553792] WDIAG [STORAGE.TRANS] 
generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.553811] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.553818] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746553772) [2024-09-13 13:02:26.553838] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.553847] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.553855] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746553834) [2024-09-13 13:02:26.563265] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.563284] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.563296] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746563296, replica_locations:[]}) [2024-09-13 13:02:26.563309] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.563328] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.563334] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.563356] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.563396] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554440513, cache_obj->added_lc()=false, cache_obj->get_object_id()=363, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.564185] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.564215] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=29] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.564234] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746564232, replica_locations:[]}) [2024-09-13 13:02:26.564252] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.564284] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.564296] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, 
do_close_plan_ret=-4006) [2024-09-13 13:02:26.564330] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.564425] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=37][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554441540, cache_obj->added_lc()=false, cache_obj->get_object_id()=362, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.564568] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.564586] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.564595] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746564595, replica_locations:[]}) [2024-09-13 13:02:26.564647] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will 
sleep(sleep_us=23000, remain_us=1701833, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.565867] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.565894] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.565904] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746565903, replica_locations:[]}) [2024-09-13 13:02:26.565945] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1] will sleep(sleep_us=24000, remain_us=1671028, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.588199] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.588228] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", 
server_list=[]) [2024-09-13 13:02:26.588243] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746588242, replica_locations:[]}) [2024-09-13 13:02:26.588255] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.588278] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.588284] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.588314] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.588358] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554465474, cache_obj->added_lc()=false, cache_obj->get_object_id()=364, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 
0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.589586] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.589606] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.589616] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746589615, replica_locations:[]}) [2024-09-13 13:02:26.589666] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=24000, remain_us=1676815, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.590252] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.590278] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.590294] INFO 
[SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746590293, replica_locations:[]}) [2024-09-13 13:02:26.590317] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.590347] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.590374] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.590401] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.590455] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554467573, cache_obj->added_lc()=false, cache_obj->get_object_id()=365, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 
0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.591680] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.591701] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.591712] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746591711, replica_locations:[]}) [2024-09-13 13:02:26.591757] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=25000, remain_us=1645216, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.614131] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.614157] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=25] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.614171] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746614170, replica_locations:[]}) [2024-09-13 13:02:26.614184] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.614208] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.614222] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.614247] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.614303] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554491417, cache_obj->added_lc()=false, cache_obj->get_object_id()=366, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:26.615568] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.615595] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=25] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.615606] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746615605, replica_locations:[]}) [2024-09-13 13:02:26.615684] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=25000, remain_us=1650797, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.617131] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.617152] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.617167] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746617167, replica_locations:[]}) [2024-09-13 13:02:26.617201] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=31] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.617228] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.617241] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.617269] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.617319] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554494434, cache_obj->added_lc()=false, cache_obj->get_object_id()=367, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.618761] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.618783] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.618795] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746618794, replica_locations:[]}) [2024-09-13 13:02:26.618848] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1618126, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.620445] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=33] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 
9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 
9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:26.641194] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.641223] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=28] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.641238] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746641237, replica_locations:[]}) [2024-09-13 13:02:26.641251] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.641275] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] 
[lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.641291] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.641329] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.641405] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554518517, cache_obj->added_lc()=false, cache_obj->get_object_id()=368, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.642956] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.642979] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.642989] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746642988, replica_locations:[]}) [2024-09-13 13:02:26.643043] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1623438, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.645279] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.645295] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.645307] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746645306, replica_locations:[]}) [2024-09-13 13:02:26.645319] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.645341] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec 
result is null(ret=-4006) [2024-09-13 13:02:26.645349] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:26.645367] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:26.645407] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554522526, cache_obj->added_lc()=false, cache_obj->get_object_id()=369, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:26.646580] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.646598] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.646614] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203746646614, replica_locations:[]}) [2024-09-13 13:02:26.646656] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1590317, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.653341] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=18][errcode=-4719] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:26.653772] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A92-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746653324) [2024-09-13 13:02:26.653797] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A92-0-0] [lt=22][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203746653324}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:26.653809] WDIAG 
[STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:26.653820] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:26.653848] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746653804) [2024-09-13 13:02:26.653857] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203746553769, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:26.653886] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.653892] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) 
[2024-09-13 13:02:26.653897] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746653865)
[2024-09-13 13:02:26.665893] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=46] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:26.669530] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.669550] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.669562] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746669561, replica_locations:[]})
[2024-09-13 13:02:26.669573] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.669595] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.669605] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.669624] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.669668] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554546785, cache_obj->added_lc()=false, cache_obj->get_object_id()=370, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.670865] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.670895] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=29] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.670904] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746670904, replica_locations:[]})
[2024-09-13 13:02:26.670947] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1595533, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.674058] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.674075] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.674090] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746674090, replica_locations:[]})
[2024-09-13 13:02:26.674111] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.674137] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.674149] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.674181] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.674230] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554551345, cache_obj->added_lc()=false, cache_obj->get_object_id()=371, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.675458] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.675476] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.675489] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746675488, replica_locations:[]})
[2024-09-13 13:02:26.675536] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1561437, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.675652] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=22] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1)
[2024-09-13 13:02:26.698382] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.698403] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.698415] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746698414, replica_locations:[]})
[2024-09-13 13:02:26.698430] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.698467] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.698479] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.698501] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.698548] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554575661, cache_obj->added_lc()=false, cache_obj->get_object_id()=372, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.699801] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.699819] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.699828] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746699827, replica_locations:[]})
[2024-09-13 13:02:26.699868] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1566612, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.703944] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.703963] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.703977] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746703977, replica_locations:[]})
[2024-09-13 13:02:26.703996] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.704024] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.704043] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.704064] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.704115] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554581231, cache_obj->added_lc()=false, cache_obj->get_object_id()=373, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.705325] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.705343] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.705356] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746705355, replica_locations:[]})
[2024-09-13 13:02:26.705422] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1531551, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.726680] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=12] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:26.726730] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=25] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952)
[2024-09-13 13:02:26.728338] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.728364] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.728382] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746728381, replica_locations:[]})
[2024-09-13 13:02:26.728401] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.728432] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.728455] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.728485] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.728543] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554605656, cache_obj->added_lc()=false, cache_obj->get_object_id()=374, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.729699] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.729719] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.729729] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746729728, replica_locations:[]})
[2024-09-13 13:02:26.729787] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1536694, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.731375] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=24][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:26.735261] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.735279] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.735290] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746735289, replica_locations:[]})
[2024-09-13 13:02:26.735307] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.735330] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.735339] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.735359] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.735403] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6554612520, cache_obj->added_lc()=false, cache_obj->get_object_id()=375, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:26.736787] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.736810] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.736824] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746736823, replica_locations:[]})
[2024-09-13 13:02:26.736890] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] will sleep(sleep_us=30000, remain_us=1500084, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.753811] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A93-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746753393)
[2024-09-13 13:02:26.753839] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A93-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203746753393}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:26.753869] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:26.753894] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:26.753900] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746753856)
[2024-09-13 13:02:26.759291] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.759316] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.759330] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746759328, replica_locations:[]})
[2024-09-13 13:02:26.759342] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.759364] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.759373] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:26.759398] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:26.760677] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.760703] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.760717] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746760716, replica_locations:[]})
[2024-09-13 13:02:26.760811] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1505670, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.767332] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.767358] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.767372] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746767371, replica_locations:[]})
[2024-09-13 13:02:26.767384] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.767409] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.768853] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.768871] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.768894] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746768893, replica_locations:[]})
[2024-09-13 13:02:26.768941] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1] will sleep(sleep_us=31000, remain_us=1468032, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.791323] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.791345] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.791358] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746791357, replica_locations:[]})
[2024-09-13 13:02:26.791388] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.791417] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.792691] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.792710] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.792720] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746792719, replica_locations:[]})
[2024-09-13 13:02:26.792769] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=31000, remain_us=1473712, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:26.800863] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.800893] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=28] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.800920] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746800919, replica_locations:[]})
[2024-09-13 13:02:26.800941] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.800964] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.802465] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.802485] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.802495] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746802495, replica_locations:[]})
[2024-09-13 13:02:26.802545] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1434429, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:26.824274] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.824298] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.824316] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746824314, replica_locations:[]})
[2024-09-13 13:02:26.824332] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:26.824360] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:26.825740] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:26.825759] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:26.825772] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746825771, replica_locations:[]}) [2024-09-13 13:02:26.825832] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=32000, remain_us=1440649, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.834313] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:26.834340] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=26][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:26.834393] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=32][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:26.834401] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:26.834418] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=16] gts nonblock renew success(ret=0, tenant_id=1, 
gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:26.835005] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.835025] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.835040] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746835038, replica_locations:[]}) [2024-09-13 13:02:26.835081] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.835109] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.836565] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.836583] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.836596] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746836595, replica_locations:[]}) [2024-09-13 13:02:26.836658] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=33000, remain_us=1400315, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.853910] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:26.853939] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=27][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746853902) [2024-09-13 13:02:26.853931] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A94-0-0] [lt=21][errcode=-4341] 
process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746853474) [2024-09-13 13:02:26.853949] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203746653863, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:26.853972] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.853953] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A94-0-0] [lt=20][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203746853474}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:26.853978] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.853983] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746853958) [2024-09-13 13:02:26.853993] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.854022] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.854028] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746853990) [2024-09-13 13:02:26.858263] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.858282] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.858293] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746858293, replica_locations:[]}) [2024-09-13 13:02:26.858305] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.858335] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.859240] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B44-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:26.859257] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B44-0-0] [lt=16][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203746858797], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:26.859649] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD4-0-0] [lt=1][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:26.859646] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.859659] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.859669] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746859668, replica_locations:[]}) [2024-09-13 13:02:26.859728] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=33000, remain_us=1406753, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.860572] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD4-0-0] [lt=11][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203746860258, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035379, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203746859839}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:26.860596] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD4-0-0] [lt=23][errcode=-8004] 
checking cluster ID failed(ret=-8004) [2024-09-13 13:02:26.866259] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=25] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:26.870201] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.870222] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.870236] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746870235, replica_locations:[]}) [2024-09-13 13:02:26.870260] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.870283] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.871721] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.871742] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.871752] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746871751, replica_locations:[]}) [2024-09-13 13:02:26.871798] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=34000, remain_us=1365175, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.873127] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:26.873158] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=5] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:26.873723] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:26.875757] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) 
[19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0) [2024-09-13 13:02:26.893250] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.893276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.893290] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746893288, replica_locations:[]}) [2024-09-13 13:02:26.893302] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.893324] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.894618] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, 
ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.894637] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.894646] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746894646, replica_locations:[]}) [2024-09-13 13:02:26.894693] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=34000, remain_us=1371788, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.906312] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.906333] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.906347] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203746906346, replica_locations:[]}) [2024-09-13 13:02:26.906360] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.906391] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.907801] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.907826] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.907842] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746907841, replica_locations:[]}) [2024-09-13 13:02:26.907923] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=35000, remain_us=1329050, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.925969] 
WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=34][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4018, dropped:141, tid:19945}, {errcode:-4721, dropped:2401, tid:20031}]) [2024-09-13 13:02:26.929141] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.929165] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.929175] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.929186] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.929201] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746929200, replica_locations:[]}) [2024-09-13 13:02:26.929221] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.929244] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:34, local_retry_times:34, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:26.929265] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.929282] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:26.929289] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:26.929294] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:26.929321] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:26.930296] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] 
[lt=1][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.930326] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=28][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:26.930625] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.930639] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.930648] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.930658] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.930673] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746930672, replica_locations:[]}) [2024-09-13 13:02:26.930687] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.930701] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:26.930710] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.930726] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:26.930733] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:26.930744] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, 
tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:26.930758] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:26.930768] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:26.930779] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:26.930787] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:26.930792] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:26.930802] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:26.930811] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' 
ORDER BY row_id, column_name) [2024-09-13 13:02:26.930823] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:26.930830] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:26.930835] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:26.930841] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:26.930850] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:26.930857] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:26.930870] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:26.930895] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=23][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:26.930903] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:26.930910] WDIAG [SQL] 
stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:26.930921] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:26.930928] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=35, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:26.930951] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] will sleep(sleep_us=35000, remain_us=1335530, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.943408] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.943430] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.943455] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] 
[lt=24] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.943463] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.943475] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746943474, replica_locations:[]}) [2024-09-13 13:02:26.943504] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.943520] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:35, local_retry_times:35, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:26.943537] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.943548] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4721] result set close failed(ret=-4721) 
[2024-09-13 13:02:26.943556] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:26.943559] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:26.943571] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:26.944708] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.944735] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:26.945157] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.945177] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.945183] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.945190] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.945203] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746945202, replica_locations:[]}) [2024-09-13 13:02:26.945212] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.945222] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:26.945231] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", 
cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.945247] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:26.945255] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:26.945267] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:26.945290] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:26.945306] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:26.945319] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:26.945328] WDIAG [SQL.JO] 
compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:26.945338] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:26.945345] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:26.945358] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:26.945371] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:26.945378] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:26.945388] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:26.945394] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:26.945410] WDIAG [SQL] 
optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:26.945417] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:26.945433] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:26.945465] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=29][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:26.945474] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:26.945486] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:26.945497] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:26.945503] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=36, local_sys_schema_version=1, local_tenant_schema_version=1) 
[2024-09-13 13:02:26.945520] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] will sleep(sleep_us=36000, remain_us=1291453, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:26.953998] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A95-0-0] [lt=25][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746953549) [2024-09-13 13:02:26.954025] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A95-0-0] [lt=22][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203746953549}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:26.954046] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, 
msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:26.954082] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=35][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203746954038) [2024-09-13 13:02:26.954095] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203746853956, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:26.954125] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.954134] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:26.954140] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203746954111) [2024-09-13 13:02:26.966366] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.966387] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.966397] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.966407] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.966421] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746966419, replica_locations:[]}) [2024-09-13 13:02:26.966446] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.966466] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, 
stmt_retry_times:35, local_retry_times:35, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:26.966483] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.966500] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:26.966506] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:26.966512] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:26.966532] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:26.967479] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.967507] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, 
ls_id={id:1}) [2024-09-13 13:02:26.967880] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.967911] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.967924] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.967934] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.967946] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746967945, replica_locations:[]}) [2024-09-13 13:02:26.967963] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.967978] WDIAG 
[SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:26.967988] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.968003] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:26.968010] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:26.968017] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:26.968034] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:26.968044] WDIAG [SQL.OPT] 
calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:26.968054] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:26.968062] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:26.968067] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:26.968076] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:26.968085] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:26.968097] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:26.968104] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) 
[2024-09-13 13:02:26.968109] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:26.968119] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:26.968125] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:26.968131] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:26.968144] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:26.968156] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:26.968163] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:26.968172] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:26.968180] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 
13:02:26.968186] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=36, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:26.968207] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] will sleep(sleep_us=36000, remain_us=1298273, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:26.982014] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.982035] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.982052] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.982062] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.982080] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746982079, replica_locations:[]}) [2024-09-13 13:02:26.982096] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:26.982118] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:36, local_retry_times:36, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:26.982139] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:26.982153] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:26.982162] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:26.982168] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:26.982206] WDIAG [SERVER] query 
(ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:26.983265] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.983296] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=30][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:26.984725] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.984771] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=43][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:26.984822] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=50] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:26.984833] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:26.984850] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203746984849, replica_locations:[]}) [2024-09-13 13:02:26.984900] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=46][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.984931] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=30][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:26.984948] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:26.984986] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=37][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 
13:02:26.985001] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:26.985010] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:26.985028] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:26.985038] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:26.985046] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:26.985056] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:26.985062] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:26.985068] WDIAG [SQL.JO] generate_normal_base_table_paths 
(ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:26.985081] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:26.985089] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:26.985102] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:26.985108] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:26.985113] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:26.985119] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:26.985129] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:26.985143] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:26.985154] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:26.985161] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:26.985168] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:26.985179] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:26.985185] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=37, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:26.985207] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13] will sleep(sleep_us=37000, remain_us=1251766, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.005482] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.005519] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=36][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.005529] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.005541] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.005558] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747005556, replica_locations:[]}) [2024-09-13 13:02:27.005574] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.005601] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:36, local_retry_times:36, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:27.005620] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.005639] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:27.005651] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:27.005656] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:27.005685] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:27.007073] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:27.007108] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=33][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:27.007494] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.007517] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.007526] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.007562] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=33] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.007576] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747007575, replica_locations:[]}) [2024-09-13 13:02:27.007594] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", 
location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:27.007608] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:27.007639] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=30][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:27.007660] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:27.007668] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:27.007680] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:27.007694] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] 
[lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:27.007730] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=35][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:27.007739] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:27.007750] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:27.007757] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:27.007763] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:27.007774] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:27.007807] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=31][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:27.007815] WDIAG [SQL.OPT] generate_raw_plan 
(ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:27.007822] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:27.007832] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:27.007839] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:27.007846] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:27.007888] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:27.007904] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:27.007912] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:27.007919] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:27.007930] WDIAG [SERVER] do_query 
(ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:27.007937] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=37, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:27.007981] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=33] will sleep(sleep_us=37000, remain_us=1258500, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.023127] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.023160] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.023171] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.023183] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.023199] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747023198, replica_locations:[]}) [2024-09-13 13:02:27.023220] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.023267] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=36][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:37, local_retry_times:37, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:27.023287] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.023302] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:27.023309] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:27.023336] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:27.023363] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:27.024587] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:27.024616] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:27.024992] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.025021] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.025031] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.025041] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.025055] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747025054, replica_locations:[]}) [2024-09-13 13:02:27.025073] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:27.025118] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=44][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:27.025127] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:27.025145] WDIAG [SQL.DAS] block_renew_tablet_location 
(ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:27.025154] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:27.025162] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:27.025181] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:27.025197] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:27.025204] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:27.025216] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:27.025222] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:27.025229] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:27.025261] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=29][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:27.025268] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:27.025276] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:27.025280] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:27.025284] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:27.025288] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:27.025296] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:27.025308] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:27.025314] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:27.025322] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:27.025327] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:27.025332] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:27.025344] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=38, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:27.025360] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] will sleep(sleep_us=38000, remain_us=1211614, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.026130] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=26][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4719, dropped:129, tid:20300}]) [2024-09-13 13:02:27.045240] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.045534] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.045556] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.045563] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.045572] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.045585] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747045584, replica_locations:[]}) [2024-09-13 13:02:27.045602] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.045622] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:37, local_retry_times:37, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:27.045640] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.045650] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:27.045660] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:27.045666] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:27.045685] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM 
__all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:27.046550] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.046851] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:27.046894] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=42][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:27.047010] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.047242] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.047263] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.047272] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.047288] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.047300] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747047299, replica_locations:[]}) [2024-09-13 13:02:27.047320] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:27.047378] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=38000, remain_us=1219103, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.047958] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.052175] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.053499] WDIAG [SERVER] 
fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.054032] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A96-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747053627) [2024-09-13 13:02:27.054068] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A96-0-0] [lt=30][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203747053627}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:27.054114] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.054133] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4023] generate min weak 
read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.054143] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747054097) [2024-09-13 13:02:27.063565] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.063778] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.063802] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.063812] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.063823] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.063843] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747063842, replica_locations:[]}) [2024-09-13 13:02:27.063864] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.063903] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.065114] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.065308] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.065332] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.065342] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.065352] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.065367] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747065366, replica_locations:[]}) [2024-09-13 13:02:27.065433] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1171541, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.066590] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:27.078544] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=21] Cache replace map node details(ret=0, replace_node_count=0, replace_time=2691, replace_start_pos=314570, replace_num=62914) [2024-09-13 13:02:27.078566] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10) [2024-09-13 13:02:27.085598] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.085898] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.085923] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.085930] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.085937] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.085947] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747085947, replica_locations:[]}) [2024-09-13 13:02:27.085961] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.085985] 
WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.087070] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.087253] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.087275] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.087281] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.087289] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.087297] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747087297, replica_locations:[]}) [2024-09-13 13:02:27.087373] INFO 
[SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=39000, remain_us=1179108, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.087993] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.088389] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.089751] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.090209] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.093033] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=18] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:27.093654] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=15] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:27.093686] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=11] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:27.094012] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=10] [MYSQL EASY STAT](log_str=conn 
count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:27.094350] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=14] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:27.094408] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=10] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:27.094898] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=14] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:27.095305] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=15] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:27.096341] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:27.104692] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.104982] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.105009] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.105019] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.105030] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.105053] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747105052, replica_locations:[]})
[2024-09-13 13:02:27.105075] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.105104] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.106359] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.106608] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.106634] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.106641] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.106647] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.106656] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747106656, replica_locations:[]})
[2024-09-13 13:02:27.106707] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=40000, remain_us=1130266, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:27.118829] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=20] swc wakeup.(stat_period_=1000000, ready=false)
[2024-09-13 13:02:27.125869] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.126596] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.126967] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.126992] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.126999] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.127007] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.127017] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747127016, replica_locations:[]})
[2024-09-13 13:02:27.127028] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.127063] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.127522] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=189][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.128253] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.128506] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.128528] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.128534] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.128542] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.128553] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747128552, replica_locations:[]})
[2024-09-13 13:02:27.128605] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=40000, remain_us=1137876, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:27.131509] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.132798] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.133821] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC7C-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.144710] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21BD-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.145341] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21C1-0-0] [lt=38][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.145616] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21C2-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.146254] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=21] PNIO [ratelimit] time: 1726203747146253, bytes: 3359286, bw: 0.252393 MB/s, add_ts: 1000268, add_bytes: 264724
[2024-09-13 13:02:27.146286] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21C6-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.146545] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21C7-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.146927] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.146949] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20301][T1_L0_G9][T1][YB42AC103326-00062119ECDB21CB-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.147145] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21CC-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.147148] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.147165] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.147171] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.147184] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.147196] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747147195, replica_locations:[]})
[2024-09-13 13:02:27.147210] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.147231] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.147509] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21D0-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.147751] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21D1-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.148088] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21D5-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.148399] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.148566] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.148581] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.148592] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.148604] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.148618] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747148617, replica_locations:[]})
[2024-09-13 13:02:27.148661] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=41000, remain_us=1088312, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:27.154081] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A97-0-0] [lt=30][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747153694)
[2024-09-13 13:02:27.154111] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A97-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203747153694}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:27.154132] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:27.154164] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747154126)
[2024-09-13 13:02:27.154180] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203746954108, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:27.154203] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:27.154216] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:27.154221] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747154191)
[2024-09-13 13:02:27.164093] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.165404] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.168814] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.169080] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.169097] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.169104] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.169111] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.169119] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747169119, replica_locations:[]})
[2024-09-13 13:02:27.169129] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.169146] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.170044] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.170259] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.170276] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.170283] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.170289] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.170301] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747170300, replica_locations:[]})
[2024-09-13 13:02:27.170342] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=41000, remain_us=1096139, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:27.175278] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.176557] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.189897] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.190145] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.190164] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.190170] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.190181] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.190192] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747190191, replica_locations:[]})
[2024-09-13 13:02:27.190206] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.190238] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.191410] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.191601] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.191645] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=41][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.191659] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.191667] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.191680] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747191679, replica_locations:[]})
[2024-09-13 13:02:27.191725] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=42000, remain_us=1045249, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:27.197286] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=18] PNIO [ratelimit] time: 1726203747197285, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007603, add_bytes: 0
[2024-09-13 13:02:27.199151] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782E1-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.202899] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.204276] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.204828] INFO [MDS] for_each_ls_in_tenant (mds_tenant_service.cpp:237) [20107][T1_Occam][T1][YB42AC103323-000621F921A60C82-0-0] [lt=23] for each ls(succ_num=0, ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.211552] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.211925] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=26] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}])
[2024-09-13 13:02:27.212035] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.212062] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.212069] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.212077] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.212090] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747212089, replica_locations:[]})
[2024-09-13 13:02:27.212102] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.212124] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.213168] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.213452] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.213469] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.213474] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.213484] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.213492] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747213492, replica_locations:[]})
[2024-09-13 13:02:27.213545] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=42000, remain_us=1052935, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:27.220038] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.221334] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.226407] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=21][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-5019, dropped:14, tid:19878}])
[2024-09-13 13:02:27.226772] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:27.226812] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=18] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952)
[2024-09-13 13:02:27.228978] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=15] gc stale ls task succ
[2024-09-13 13:02:27.233462] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=19] start do ls ha handler(ls_id_array_=[])
[2024-09-13 13:02:27.233930] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.234243] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.234275] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.234281] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.234288] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.234309] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747234308, replica_locations:[]})
[2024-09-13 13:02:27.234322] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.234340] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.235600] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.235864] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847)
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.235893] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.235899] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.235907] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.235919] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747235918, replica_locations:[]}) [2024-09-13 13:02:27.235969] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=1001005, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.237634] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=0] REACH SYSLOG RATE 
LIMIT [bandwidth] [2024-09-13 13:02:27.237651] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:27.237658] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:27.237665] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:27.242843] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.244319] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C85-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.244412] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.244524] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.244554] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.244564] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.244579] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.244618] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=8][errcode=0] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:27.245674] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=14] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:27.245700] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=24][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:27.245711] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=9][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:27.245719] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=8][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:27.245728] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=7][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:27.245734] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=6][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:27.245743] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=5][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:27.245808] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=63][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:27.245822] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=13][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:27.245829] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=6][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:27.245835] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=5][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:27.245845] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=9][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:27.245852] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=6][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:27.245859] WDIAG [SQL.RESV] 
select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=6][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:27.245887] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=10][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:27.245894] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:27.245904] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:27.245911] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=5][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:27.245921] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=9][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:27.245929] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:27.245940] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=9][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 
13:02:27.245960] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=14][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:27.245975] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:27.245985] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=9][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:27.245990] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=5][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:27.246009] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=5][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:27.246021] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=1][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:27.246029] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 
13:02:27.246039] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=9][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:27.246046] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=6][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:27.246053] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=6][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203747245510, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:27.246069] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=15][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:27.246076] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=5][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:27.246144] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=8][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:27.246156] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=11][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:27.246164] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=8][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:27.246171] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=6][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:27.246180] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=6][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:27.246192] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=10][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:27.246198] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C85-0-0] [lt=5][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:27.254207] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.254224] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.254213] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A98-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, 
total_part_count=0, generate_timestamp=1726203747253770) [2024-09-13 13:02:27.254231] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747254192) [2024-09-13 13:02:27.254244] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:27.254233] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A98-0-0] [lt=18][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203747253770}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:27.254257] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, 
ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747254240) [2024-09-13 13:02:27.254268] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203747154189, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:27.254276] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:27.254282] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:27.254294] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.254297] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.254301] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747254291) [2024-09-13 13:02:27.255738] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.256008] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.256025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.256031] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.256045] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.256060] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747256059, replica_locations:[]}) [2024-09-13 
13:02:27.256071] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.256093] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.257070] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.257310] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.257330] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.257336] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.257344] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.257357] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747257356, replica_locations:[]}) [2024-09-13 13:02:27.257405] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=1009075, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.257683] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=11] table not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, table_name.ptr()="data_size:27, data:5F5F616C6C5F7669727475616C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:27.257709] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=24][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, ret=-5019) [2024-09-13 13:02:27.257717] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_virtual_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:27.257728] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=10][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_virtual_ls_meta_table) 
[2024-09-13 13:02:27.257734] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:27.257739] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=5][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:27.257745] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_virtual_ls_meta_table' doesn't exist [2024-09-13 13:02:27.257750] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:27.257757] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=6][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:27.257761] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=3][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:27.257765] WDIAG [SQL.RESV] resolve_joined_table_item (ob_dml_resolver.cpp:3379) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=3][errcode=-5019] resolve table failed(ret=-5019) [2024-09-13 13:02:27.257769] WDIAG [SQL.RESV] resolve_joined_table (ob_dml_resolver.cpp:2934) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] resolve joined table item failed(ret=-5019) [2024-09-13 13:02:27.257774] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2788) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] resolve joined 
table failed(ret=-5019) [2024-09-13 13:02:27.257778] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:27.257782] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:27.257786] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:27.257790] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:27.257798] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:27.257802] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:27.257808] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:27.257812] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=3][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:27.257816] WDIAG [SQL] stmt_query (ob_sql.cpp:229) 
[20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;, ret=-5019) [2024-09-13 13:02:27.257825] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=8][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:27.257830] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:27.257840] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=7][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:27.257853] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:27.257857] WDIAG 
[SERVER] force_close (ob_inner_sql_result.cpp:200) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:27.257861] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:27.257887] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=3][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:27.257895] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20295][BlackListServic][T1][YB42AC103323-000621F921260C80-0-0] [lt=9][errcode=0] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:27.257901] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20295][BlackListServic][T0][YB42AC103323-000621F921260C80-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, aret=-5019, ret=-5019) [2024-09-13 13:02:27.257910] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) 
[20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:27.257915] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:27.257920] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:27.257925] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203747257464, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:27.257931] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:111) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:27.257940] WDIAG [STORAGE.TRANS] do_thread_task_ (ob_black_list.cpp:222) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=select 
a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:27.257982] INFO [STORAGE.TRANS] run1 (ob_black_list.cpp:194) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4] ls blacklist refresh finish(cost_time=1372) [2024-09-13 13:02:27.265870] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.266932] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:27.267400] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.278648] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=7] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9) [2024-09-13 13:02:27.279186] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.279463] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.279497] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.279507] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.279515] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.279530] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747279529, replica_locations:[]}) [2024-09-13 13:02:27.279542] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.279563] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:27.279576] WDIAG 
[SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.280767] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.280984] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.281002] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.281008] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.281024] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.281034] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747281033, replica_locations:[]}) [2024-09-13 13:02:27.281082] 
INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1] will sleep(sleep_us=44000, remain_us=955891, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.284076] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.285479] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=40][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.300663] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.300925] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=50][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.300948] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.300955] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.300962] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is 
empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.300971] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747300971, replica_locations:[]}) [2024-09-13 13:02:27.300986] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.301001] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:27.301017] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.302007] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.302170] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.302192] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.302213] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.302222] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.302233] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747302233, replica_locations:[]}) [2024-09-13 13:02:27.302278] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=44000, remain_us=964202, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.313152] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.314633] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.325333] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.325628] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.325657] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.325668] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.325708] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.325730] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747325729, replica_locations:[]}) [2024-09-13 13:02:27.325753] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", 
tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.325817] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.325989] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.326562] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=22][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:0, dropped:95, tid:20197}]) [2024-09-13 13:02:27.327577] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.327800] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=40][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.327970] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.327984] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.327990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.327997] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.328006] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747328005, replica_locations:[]}) [2024-09-13 13:02:27.328054] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1] will sleep(sleep_us=45000, remain_us=908919, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.334958] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:27.335006] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:27.335024] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=17] refresh gts(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1, need_refresh=false, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:27.335035] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) 
[20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:27.335024] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CAB-0-0] [lt=35][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203747334983}) [2024-09-13 13:02:27.346549] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.346804] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.346826] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.346837] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.346848] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.346864] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747346862, replica_locations:[]}) [2024-09-13 13:02:27.346911] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=45] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.346946] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.346955] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.346980] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.347038] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555224149, cache_obj->added_lc()=false, cache_obj->get_object_id()=406, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:27.348317] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.348544] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.348567] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.348577] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.348591] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.348607] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747348606, replica_locations:[]}) [2024-09-13 13:02:27.348672] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=45000, remain_us=917809, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.348681] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=15] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:27.354311] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:27.354334] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747354305) [2024-09-13 13:02:27.354344] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203747254274, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:27.354343] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A99-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, 
total_part_count=0, generate_timestamp=1726203747353873) [2024-09-13 13:02:27.354365] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.354374] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.354362] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A99-0-0] [lt=18][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203747353873}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:27.354379] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747354351) [2024-09-13 13:02:27.354390] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.354394] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.354397] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747354388) [2024-09-13 13:02:27.359727] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B45-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:27.359745] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B45-0-0] [lt=17][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203747359277], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:27.360294] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD5-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:27.360858] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD5-0-0] [lt=17][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203747360546, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, 
process_start_end_diff:0, process_end_response_diff:0, packet_id:62035420, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203747359814}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:27.360914] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD5-0-0] [lt=55][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:27.361311] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.362920] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.369084] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.370363] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.373221] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.373490] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.373510] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] 
fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.373517] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.373524] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.373543] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747373542, replica_locations:[]}) [2024-09-13 13:02:27.373555] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.373576] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.373582] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.373601] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.373647] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555250765, cache_obj->added_lc()=false, cache_obj->get_object_id()=407, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.374748] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.375618] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.375639] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.375645] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.375653] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.375669] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747375668, replica_locations:[]}) [2024-09-13 13:02:27.375721] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=46000, remain_us=861253, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.393917] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.394175] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.394199] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.394210] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.394227] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.394244] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747394243, replica_locations:[]}) [2024-09-13 13:02:27.394272] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=26] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.394300] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.394312] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.394339] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.394396] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555271509, cache_obj->added_lc()=false, cache_obj->get_object_id()=408, 
cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.395616] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.395964] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.396006] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=41][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.396016] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.396026] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.396041] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747396040, replica_locations:[]}) [2024-09-13 13:02:27.396110] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=46000, remain_us=870370, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.410654] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.412136] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.412986] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.414277] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.421959] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.422197] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.422217] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4018] 
fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.422224] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.422232] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.422244] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747422243, replica_locations:[]}) [2024-09-13 13:02:27.422265] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.422291] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.422299] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.422332] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.422378] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555299496, cache_obj->added_lc()=false, cache_obj->get_object_id()=409, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.423553] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.423787] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.423808] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.423815] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.423823] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.423836] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747423835, replica_locations:[]}) [2024-09-13 13:02:27.423898] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=813076, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.432299] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92169005E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.442315] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.442569] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.442589] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.442599] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.442609] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.442626] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747442625, replica_locations:[]}) [2024-09-13 13:02:27.442646] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.442673] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.442685] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.442710] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.442772] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555319887, cache_obj->added_lc()=false, cache_obj->get_object_id()=410, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.443721] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.444101] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.444125] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.444134] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.444143] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.444155] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747444154, replica_locations:[]}) [2024-09-13 13:02:27.444215] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=822266, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.454432] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9A-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747453959) [2024-09-13 13:02:27.454467] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:27.454491] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747454461) [2024-09-13 13:02:27.454467] 
WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9A-0-0] [lt=34][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203747453959}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:27.454504] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203747354350, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:27.454533] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.454542] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.454549] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747454517) [2024-09-13 13:02:27.454562] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.454569] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.454579] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747454559) [2024-09-13 13:02:27.457757] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.459185] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.460894] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.462444] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.467253] INFO [COMMON] compute_tenant_wash_size 
(ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=25] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:27.471094] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.471385] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.471406] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.471419] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.471429] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.471457] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203747471456, replica_locations:[]}) [2024-09-13 13:02:27.471469] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.471495] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.471504] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.471524] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.471570] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555348688, cache_obj->added_lc()=false, cache_obj->get_object_id()=411, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.472680] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.473053] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.473073] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.473104] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=30] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.473117] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.473130] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747473129, replica_locations:[]}) [2024-09-13 13:02:27.473188] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=48000, remain_us=763786, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.476460] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] 
[lt=23][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:27.478739] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=21] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8) [2024-09-13 13:02:27.491405] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.491669] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.491687] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.491693] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.491700] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.491709] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747491709, replica_locations:[]}) [2024-09-13 13:02:27.491720] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.491738] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.491744] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.491767] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.491807] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555368925, cache_obj->added_lc()=false, cache_obj->get_object_id()=412, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.492707] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.492965] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.493007] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.493017] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.493029] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.493042] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747493041, replica_locations:[]}) [2024-09-13 13:02:27.493095] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=48000, remain_us=773386, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.503689] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.505009] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.512337] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.514253] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.516353] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=21][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:27.521371] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.521624] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.521645] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.521651] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.521659] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.521674] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747521673, replica_locations:[]}) [2024-09-13 13:02:27.521688] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.521721] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.521730] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.521752] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.521819] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555398935, cache_obj->added_lc()=false, cache_obj->get_object_id()=413, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.522889] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.523160] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.523180] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.523186] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.523193] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.523202] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747523202, replica_locations:[]}) [2024-09-13 13:02:27.523248] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=713726, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.541283] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.541563] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.541577] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.541591] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.541598] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.541607] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747541607, replica_locations:[]}) [2024-09-13 13:02:27.541618] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.541635] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.541641] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.541658] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.541695] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555418813, cache_obj->added_lc()=false, cache_obj->get_object_id()=414, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 
0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.542604] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.542819] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.542833] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.542845] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.542852] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.542860] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747542859, replica_locations:[]}) [2024-09-13 13:02:27.542922] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=723559, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.550525] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.551956] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.554545] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9B-0-0] [lt=38][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747554051) [2024-09-13 13:02:27.554578] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9B-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203747554051}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, 
cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:27.554602] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:27.554635] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747554595) [2024-09-13 13:02:27.554655] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203747454515, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:27.554683] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.554697] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.554705] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747554669) [2024-09-13 13:02:27.564978] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.566398] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.572484] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.572756] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.572781] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.572788] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.572796] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) 
[2024-09-13 13:02:27.572812] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747572811, replica_locations:[]}) [2024-09-13 13:02:27.572833] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.572865] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.572885] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.572915] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.572975] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555450089, cache_obj->added_lc()=false, cache_obj->get_object_id()=415, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 
0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.574258] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.574464] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.574484] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.574490] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.574499] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.574516] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747574515, replica_locations:[]}) [2024-09-13 13:02:27.574582] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=50000, remain_us=662391, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.592160] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.592457] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.592478] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.592492] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.592501] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.592513] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203747592512, replica_locations:[]})
[2024-09-13 13:02:27.592525] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.592547] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.592553] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:27.592575] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.592670] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555469734, cache_obj->added_lc()=false, cache_obj->get_object_id()=416, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.593750] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.593981] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847)
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.593996] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.594007] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.594015] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.594025] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747594024, replica_locations:[]})
[2024-09-13 13:02:27.594075] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=50000, remain_us=672406, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:27.598472] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=16][errcode=-4719] get ls
handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.599848] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.617977] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.619419] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.621179] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=45] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807;
send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:27.624845] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761)
[20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.625084] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.625110] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.625117] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.625135] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.625160] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747625159, replica_locations:[]})
[2024-09-13 13:02:27.625176] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS",
tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.625202] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.625209] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:27.625229] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.625275] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555502392, cache_obj->added_lc()=false, cache_obj->get_object_id()=417, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.626426] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.626632] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.626667] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=35][errcode=-4018] fail to get
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.626674] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.626681] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.626696] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747626695, replica_locations:[]})
[2024-09-13 13:02:27.626749] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=610224, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:27.644294] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.644631] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.644653] WDIAG
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.644664] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.644676] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.644691] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747644690, replica_locations:[]})
[2024-09-13 13:02:27.644709] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.644736] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.644745] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail close main query(ret=0,
do_close_plan_ret=-4006)
[2024-09-13 13:02:27.644777] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.644830] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555521943, cache_obj->added_lc()=false, cache_obj->get_object_id()=418, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.646167] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.646397] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.646420] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.646431] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.646466] INFO
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=33] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.646478] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747646478, replica_locations:[]})
[2024-09-13 13:02:27.646543] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=619937, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:27.647521] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.649020] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.654662] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:27.654680] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:27.654687] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_
(ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747654646)
[2024-09-13 13:02:27.667612] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=30] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:27.672050] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.673863] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=39][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.677972] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=34][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.678270] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.678291] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.678298] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] leader
doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.678306] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.678319] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747678318, replica_locations:[]})
[2024-09-13 13:02:27.678334] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.678356] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.678365] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:27.678400] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.678452] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141)
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555555570, cache_obj->added_lc()=false, cache_obj->get_object_id()=419, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.678827] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=21] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7)
[2024-09-13 13:02:27.679593] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=32][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.679847] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.679867] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.679883] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.679894] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] server_list is empty, do
nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.679906] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747679905, replica_locations:[]})
[2024-09-13 13:02:27.679955] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=52000, remain_us=557018, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:27.697762] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.697748] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=37][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.697993] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.698017] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.698038] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140)
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.698049] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.698061] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747698061, replica_locations:[]})
[2024-09-13 13:02:27.698078] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.698100] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.698113] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:27.698170] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.698226] WDIAG [SQL.PC] common_free
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555575340, cache_obj->added_lc()=false, cache_obj->get_object_id()=420, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.699236] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=48][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.699364] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.699552] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.699575] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.699595] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.699611] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151)
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.699627] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747699626, replica_locations:[]})
[2024-09-13 13:02:27.699684] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=52000, remain_us=566797, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:27.726853] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=15] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:27.726899] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=18] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952)
[2024-09-13 13:02:27.727493] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.729137] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.732149] WDIAG [SERVER]
fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.732423] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.732480] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=55][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.732492] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.732500] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.732514] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747732513, replica_locations:[]}) [2024-09-13 13:02:27.732528] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations 
finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.732551] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.732560] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.732579] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.732620] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555609738, cache_obj->added_lc()=false, cache_obj->get_object_id()=421, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.733725] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.733942] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.733965] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] 
[lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.733971] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.733981] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.733992] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747733991, replica_locations:[]}) [2024-09-13 13:02:27.734040] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=53000, remain_us=502934, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.748954] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.750375] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.751905] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.752115] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.752134] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.752140] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.752148] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.752169] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747752168, replica_locations:[]}) [2024-09-13 13:02:27.752183] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.752206] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.752215] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.752233] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.752276] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555629393, cache_obj->added_lc()=false, cache_obj->get_object_id()=422, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.752773] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=30][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:27.753297] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.753528] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail 
to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.753550] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.753557] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.753565] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.753583] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747753582, replica_locations:[]}) [2024-09-13 13:02:27.753635] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=53000, remain_us=512846, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.754636] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9C-0-0] [lt=19][errcode=-4341] process cluster heartbeat rpc: self is not in cluster 
service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747754161) [2024-09-13 13:02:27.754671] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9C-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203747754161}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:27.754691] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:27.754702] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:27.754725] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747754686) [2024-09-13 13:02:27.754736] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203747554666, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:27.754755] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.754760] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:27.754765] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747754744) [2024-09-13 13:02:27.783692] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.785247] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:27.787241] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.787531] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.787569] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.787581] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.787589] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.787601] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747787601, replica_locations:[]}) [2024-09-13 13:02:27.787621] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] 
[TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.787667] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.787680] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.787707] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.787767] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555664883, cache_obj->added_lc()=false, cache_obj->get_object_id()=423, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.788966] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.789223] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.789241] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.789247] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.789257] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.789269] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747789268, replica_locations:[]}) [2024-09-13 13:02:27.789326] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1] will sleep(sleep_us=54000, remain_us=447648, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:27.801055] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.802897] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:27.806891] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.807197] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.807223] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.807234] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.807245] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.807265] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747807264, replica_locations:[]}) [2024-09-13 13:02:27.807280] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [TABLET_LOCATION] batch 
renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:27.807303] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:27.807315] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:27.807343] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:27.807399] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555684511, cache_obj->added_lc()=false, cache_obj->get_object_id()=424, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:27.808640] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.808895] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.808924] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.808934] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.808945] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.809001] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=51] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747809000, replica_locations:[]}) [2024-09-13 13:02:27.809062] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=54000, remain_us=457419, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:27.835572] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:27.840823] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.842210] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.843541] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:27.843864] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.843895] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:27.843905] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:27.843916] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:27.843926] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203747843926, replica_locations:[]})
[2024-09-13 13:02:27.843938] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.843959] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.843965] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:27.843983] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.844033] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555721148, cache_obj->added_lc()=false, cache_obj->get_object_id()=425, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.845343] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.845610] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.845637] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.845646] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.845653] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.845678] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747845677, replica_locations:[]})
[2024-09-13 13:02:27.845741] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=55000, remain_us=391232, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:27.854683] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9D-0-0] [lt=32][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747854233)
[2024-09-13 13:02:27.854711] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9D-0-0] [lt=19][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203747854233}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:27.854749] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:27.854761] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:27.854771] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747854734)
[2024-09-13 13:02:27.854795] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.856265] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.860147] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B46-0-0] [lt=28] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:27.860164] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B46-0-0] [lt=16][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203747859753], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:27.860675] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD6-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.861306] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD6-0-0] [lt=21][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203747860989, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035432, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203747860508}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:27.861334] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD6-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:27.863260] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.863565] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.863587] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.863599] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.863606] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.863628] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747863627, replica_locations:[]})
[2024-09-13 13:02:27.863638] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.863657] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.863662] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:27.863679] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.863718] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555740835, cache_obj->added_lc()=false, cache_obj->get_object_id()=426, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.864705] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.864951] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.864972] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.865000] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=27] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.865011] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.865023] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747865022, replica_locations:[]})
[2024-09-13 13:02:27.865078] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=55000, remain_us=401402, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:27.867959] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=21] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:27.872892] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=18] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:27.873433] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=16] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=5455, clean_start_pos=629145, clean_num=125829)
[2024-09-13 13:02:27.873766] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=13] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:27.874062] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=6] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:27.878885] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6)
[2024-09-13 13:02:27.898895] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.900430] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.900947] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=41][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.901230] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.901250] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.901256] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.901276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.901289] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747901288, replica_locations:[]})
[2024-09-13 13:02:27.901302] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.901324] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.901333] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:27.901356] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.901405] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555778523, cache_obj->added_lc()=false, cache_obj->get_object_id()=427, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.902469] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.902867] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.902893] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.902899] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.902913] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.902923] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747902922, replica_locations:[]})
[2024-09-13 13:02:27.902965] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1] will sleep(sleep_us=56000, remain_us=334008, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:27.908977] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.910546] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.920323] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.920641] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.920660] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.920666] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.920674] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.920684] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747920684, replica_locations:[]})
[2024-09-13 13:02:27.920705] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.920726] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.920732] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:27.920750] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.920792] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555797907, cache_obj->added_lc()=false, cache_obj->get_object_id()=428, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.921780] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.922030] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.922053] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.922062] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.922071] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.922085] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747922085, replica_locations:[]})
[2024-09-13 13:02:27.922152] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=56000, remain_us=344329, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:27.954809] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:27.954837] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203747954802)
[2024-09-13 13:02:27.954847] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203747754743, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:27.954869] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:27.954896] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:27.954902] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203747954854)
[2024-09-13 13:02:27.957972] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.959183] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.959334] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.959529] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.959550] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.959560] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.959572] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.959589] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747959588, replica_locations:[]})
[2024-09-13 13:02:27.959612] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.959641] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.959655] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:27.959674] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.959715] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555836832, cache_obj->added_lc()=false, cache_obj->get_object_id()=429, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.960943] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.961319] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.961338] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.961345] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.961352] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.961361] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747961360, replica_locations:[]})
[2024-09-13 13:02:27.961414] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=275560, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:27.964234] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.965674] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.978349] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.978629] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.978650] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.978657] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.978664] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.978675] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747978674, replica_locations:[]})
[2024-09-13 13:02:27.978686] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:27.978705] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:27.978711] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:27.978737] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:27.978781] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555855897, cache_obj->added_lc()=false, cache_obj->get_object_id()=430, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:27.979755] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:27.979976] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.979995] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:27.980001] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:27.980008] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:27.980016] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203747980015, replica_locations:[]})
[2024-09-13 13:02:27.980058] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1] will sleep(sleep_us=57000, remain_us=286422, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:27.993194] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=28][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:28.017911] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.018617] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761)
[20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.018948] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.018975] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.018982] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.018990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.019003] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748019002, replica_locations:[]}) [2024-09-13 13:02:28.019015] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", 
tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.019037] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.019043] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.019068] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.019112] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555896230, cache_obj->added_lc()=false, cache_obj->get_object_id()=431, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.019425] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.020201] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.020414] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.020695] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.020721] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.020728] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.020735] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.020745] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748020744, replica_locations:[]}) [2024-09-13 13:02:28.020800] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=216173, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:28.021688] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=53][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.027689] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=20][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:1281, tid:19944}]) [2024-09-13 13:02:28.037278] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.037568] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.037588] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.037595] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.037618] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.037627] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748037626, replica_locations:[]}) [2024-09-13 13:02:28.037638] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.037653] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:57, local_retry_times:57, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:28.037666] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.037672] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.037680] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:28.037684] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:28.037688] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) 
[2024-09-13 13:02:28.037700] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:28.037707] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.037755] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555914870, cache_obj->added_lc()=false, cache_obj->get_object_id()=432, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.038674] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=69][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.038704] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=29][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:28.038785] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.039037] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.039052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.039058] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.039065] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.039072] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748039072, replica_locations:[]}) [2024-09-13 13:02:28.039081] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", 
location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.039093] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:28.039099] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.039107] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:28.039112] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:28.039117] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:28.039127] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] 
[lt=9][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:28.039134] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:28.039139] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:28.039144] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:28.039148] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:28.039152] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:28.039159] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:28.039166] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:28.039170] WDIAG [SQL.OPT] generate_raw_plan 
(ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:28.039174] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:28.039178] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:28.039182] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:28.039186] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:28.039196] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:28.039201] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:28.039206] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:28.039210] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:28.039215] WDIAG [SERVER] do_query 
(ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:28.039221] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=58, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:28.039235] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] will sleep(sleep_us=58000, remain_us=227245, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:28.047160] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1921) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=5] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1) [2024-09-13 13:02:28.047179] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1462) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=16] start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=161061270, cache_obj_num=1, cache_node_num=1) [2024-09-13 13:02:28.047187] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1479) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=7] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=161061270, cache_obj_num=1, cache_node_num=1) [2024-09-13 13:02:28.047194] INFO [SQL.PC] runTimerTask (ob_plan_cache.cpp:2678) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=5] schedule next cache evict task(evict_interval=5000000) [2024-09-13 13:02:28.049604] INFO [SQL.PC] dump_all_objs 
(ob_plan_cache.cpp:2397) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=4] Dumping All Cache Objs(alloc_obj_list.count()=3, alloc_obj_list=[{obj_id:206, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:2, added_to_lc:true, mem_used:157887}, {obj_id:433, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:1, added_to_lc:false, mem_used:23272}, {obj_id:434, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:1, added_to_lc:false, mem_used:23272}]) [2024-09-13 13:02:28.049632] INFO [SQL.PC] runTimerTask (ob_plan_cache.cpp:2686) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=26] schedule next cache evict task(evict_interval=5000000) [2024-09-13 13:02:28.054814] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9E-0-0] [lt=26][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748054356) [2024-09-13 13:02:28.054840] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9E-0-0] [lt=18][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203748054356}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, 
valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:28.054871] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.054893] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.054901] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748054856) [2024-09-13 13:02:28.058340] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=16][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:28.059676] INFO [PALF] runTimerTask (block_gc_timer_task.cpp:101) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=29] BlockGCTimerTask success(ret=0, cost_time_us=12, palf_env_impl_={IPalfEnvImpl:{IPalfEnvImpl:"Dummy"}, self:"172.16.51.35:2882", log_dir:"/data1/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, 
log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1}, log_alloc_mgr_:{flying_log_task:0, flying_meta_task:0}})
[2024-09-13 13:02:28.073761] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=19] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:28.077319] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.078772] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.078977] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=22] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5)
[2024-09-13 13:02:28.079020] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.079063] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.079290] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.079308] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.079314] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.079322] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.079334] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748079333, replica_locations:[]})
[2024-09-13 13:02:28.079360] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=24] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.079377] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:58, local_retry_times:58, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:28.079392] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.079397] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.079406] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:28.079410] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:28.079413] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:28.079425] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:28.079433] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.079502] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=24][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555956616, cache_obj->added_lc()=false, cache_obj->get_object_id()=433, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.080416] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.080596] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:28.080625] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=28][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:28.080744] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.080982] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.080999] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.081005] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.081011] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.081021] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748081020, replica_locations:[]})
[2024-09-13 13:02:28.081035] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:28.081045] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:28.081053] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:28.081070] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:28.081076] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:28.081081] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:28.081091] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:28.081098] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:28.081103] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:28.081109] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:28.081113] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:28.081117] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:28.081123] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:28.081131] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:28.081135] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:28.081139] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:28.081142] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:28.081147] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:28.081151] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:28.081161] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:28.081167] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:28.081171] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:28.081176] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:28.081181] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:28.081185] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=59, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:28.081202] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] will sleep(sleep_us=59000, remain_us=155772, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203748236973)
[2024-09-13 13:02:28.093560] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=13] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:28.093588] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=18] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:28.093720] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=22] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:28.094619] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=11] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:28.094659] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=11] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:28.094898] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:28.095281] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=12] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:28.095408] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:28.096030] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=18] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:28.097500] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.097715] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.097740] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.097747] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.097761] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.097770] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748097769, replica_locations:[]})
[2024-09-13 13:02:28.097781] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.097796] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:58, local_retry_times:58, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:28.097810] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.097816] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.097823] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:28.097827] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:28.097831] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:28.097858] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:28.097865] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.097934] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=32][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6555975050, cache_obj->added_lc()=false, cache_obj->get_object_id()=434, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.098831] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:28.098859] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:28.098981] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.099183] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.099196] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.099217] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.099223] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.099232] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748099231, replica_locations:[]})
[2024-09-13 13:02:28.099241] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:28.099259] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:28.099265] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:28.099273] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:28.099278] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:28.099283] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:28.099292] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:28.099300] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:28.099304] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:28.099309] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:28.099313] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:28.099317] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:28.099326] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:28.099332] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:28.099336] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:28.099340] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:28.099343] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:28.099348] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:28.099352] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:28.099362] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:28.099367] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:28.099372] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:28.099376] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:28.099381] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:28.099387] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=59, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:28.099402] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] will sleep(sleep_us=59000, remain_us=167079, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:28.118919] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=19] swc wakeup.(stat_period_=1000000, ready=false)
[2024-09-13 13:02:28.119801] INFO [PALF] log_loop_ (log_loop_thread.cpp:155) [20122][T1_LogLoop][T1][Y0-0000000000000000-0-0] [lt=19] LogLoopThread round_cost_time(us)(round_cost_time=2)
[2024-09-13 13:02:28.119831] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1})
[2024-09-13 13:02:28.121324] INFO [SQL.QRR] runTimerTask (ob_udr_mgr.cpp:92) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=9] run rewrite rule refresh task(rule_mgr_->tenant_id_=1)
[2024-09-13 13:02:28.121361] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=16][errcode=0] server is initiating(server_id=0, local_seq=39, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:28.123211] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=13] table not exist(tenant_id=1, database_id=201001, table_name=__all_sys_stat, table_name.ptr()="data_size:14, data:5F5F616C6C5F7379735F73746174", ret=-5019)
[2024-09-13 13:02:28.123238] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=26][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_sys_stat, ret=-5019)
[2024-09-13 13:02:28.123252] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=12][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_sys_stat, db_name=oceanbase)
[2024-09-13 13:02:28.123263] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=10][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_sys_stat)
[2024-09-13 13:02:28.123274] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=8][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:28.123281] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=6][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:28.123292] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=6][errcode=-5019] Table 'oceanbase.__all_sys_stat' doesn't exist
[2024-09-13 13:02:28.123300] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:28.123308] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=7][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:28.123314] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=6][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:28.123322] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=7][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:28.123329] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=6][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:28.123337] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=8][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:28.123345] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=7][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:28.123359] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=8][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:28.123367] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=7][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:28.123375] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=6][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:28.123384] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=7][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:28.123391] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=6][errcode=-5019] fail to handle text query(stmt=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE, ret=-5019)
[2024-09-13 13:02:28.123399] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:28.123407] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:28.123423] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=12][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:28.123451] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=24][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:28.123458] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=6][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:28.123464] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=6][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:28.123487] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=7][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:28.123503] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=14][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.123511] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7E-0-0] [lt=8][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:28.123521] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND 
TENANT_ID = 0 FOR UPDATE) [2024-09-13 13:02:28.123530] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:28.123539] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:28.123548] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203748122999, sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE) [2024-09-13 13:02:28.123558] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:28.123566] WDIAG [SHARE] fetch_max_id (ob_max_id_fetcher.cpp:482) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] execute sql failed(sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE, ret=-5019) [2024-09-13 13:02:28.123598] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D9A3A431-0-0] [lt=28][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:28.123641] WDIAG [SQL.QRR] fetch_max_rule_version (ob_udr_sql_service.cpp:141) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] failed to fetch max rule version(ret=-5019, tenant_id=1) [2024-09-13 13:02:28.123655] WDIAG [SQL.QRR] sync_rule_from_inner_table (ob_udr_mgr.cpp:251) 
[20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-5019] failed to fetch max rule version(ret=-5019) [2024-09-13 13:02:28.123664] WDIAG [SQL.QRR] runTimerTask (ob_udr_mgr.cpp:94) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] failed to sync rule from inner table(ret=-5019) [2024-09-13 13:02:28.134360] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC7D-0-0] [lt=25][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:28.135320] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.136866] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.140421] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.140799] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.140819] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.140826] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.140834] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.140845] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748140844, replica_locations:[]}) [2024-09-13 13:02:28.140858] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.140882] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:59, local_retry_times:59, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:28.140902] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.140907] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.140915] WDIAG [SERVER] inner_close 
(ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:28.140919] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:28.140922] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:28.140937] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:28.140944] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.140983] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556018101, cache_obj->added_lc()=false, cache_obj->get_object_id()=435, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.141038] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:28.141986] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.142009] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=22][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:28.142125] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.142347] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.142360] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.142365] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.142372] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:28.142380] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748142379, replica_locations:[]}) [2024-09-13 13:02:28.142390] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.142396] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:28.142402] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.142410] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:28.142415] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] 
block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:28.142423] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:28.142434] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:28.142449] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:28.142454] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:28.142459] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:28.142463] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:28.142467] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 
13:02:28.142473] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:28.142479] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:28.142483] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:28.142487] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:28.142493] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:28.142497] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:28.142502] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:28.142512] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:28.142518] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:28.142524] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:28.142530] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:28.142537] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:28.142543] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=60, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:28.142560] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9] will sleep(sleep_us=60000, remain_us=94414, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:28.142600] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DC-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.153029] INFO eloop_run (eloop.c:144) 
[19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=21] PNIO [ratelimit] time: 1726203748153027, bytes: 3485407, bw: 0.119469 MB/s, add_ts: 1006774, add_bytes: 126121 [2024-09-13 13:02:28.154831] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9F-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748154426) [2024-09-13 13:02:28.154855] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6A9F-0-0] [lt=18][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203748154426}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:28.154882] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 
13:02:28.154899] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748154867) [2024-09-13 13:02:28.154909] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203747954854, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:28.154930] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.154939] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.154946] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748154918) [2024-09-13 13:02:28.157768] INFO [SQL.EXE] run2 (ob_maintain_dependency_info_task.cpp:227) [19986][MaintainDepInfo][T0][Y0-0000000000000000-0-0] [lt=18] [ASYNC TASK QUEUE](queue_.size()=0, sys_view_consistent_.size()=0) [2024-09-13 13:02:28.158587] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.158890] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.158912] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.158921] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.158931] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.158950] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748158949, replica_locations:[]}) [2024-09-13 13:02:28.158964] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.158988] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:59, local_retry_times:59, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:28.159004] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.159012] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.159022] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:28.159032] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:28.159037] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:28.159051] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:28.159061] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.159111] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556036225, cache_obj->added_lc()=false, cache_obj->get_object_id()=436, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.160067] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.160091] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=24][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:28.160224] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.160467] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.160486] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.160495] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.160505] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.160520] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748160519, replica_locations:[]}) [2024-09-13 13:02:28.160534] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.160545] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:28.160559] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.160571] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:28.160579] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:28.160590] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:28.160603] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:28.160616] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:28.160623] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:28.160630] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:28.160637] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:28.160642] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:28.160651] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:28.160662] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:28.160668] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:28.160674] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:28.160680] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:28.160686] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:28.160692] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:28.160705] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:28.160713] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:28.160720] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:28.160726] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:28.160733] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:28.160739] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM 
__all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=60, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:28.160760] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] will sleep(sleep_us=60000, remain_us=105720, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203748266480) [2024-09-13 13:02:28.174624] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:565) [20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=45] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0) [2024-09-13 13:02:28.194471] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.195935] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=21][errcode=-4719] get ls 
handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.200676] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782E2-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.202777] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.203312] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.203333] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.203358] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=24] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.203366] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.203379] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203748203378, replica_locations:[]}) [2024-09-13 13:02:28.203391] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.203409] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:60, local_retry_times:60, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:28.203426] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.203432] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.203459] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=23][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:28.203470] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:28.203473] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:28.203487] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] 
[lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:28.203494] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.203535] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556080652, cache_obj->added_lc()=false, cache_obj->get_object_id()=437, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.204565] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.204592] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:28.204750] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.204893] INFO eloop_run (eloop.c:144) 
[19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=16] PNIO [ratelimit] time: 1726203748204891, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007606, add_bytes: 0 [2024-09-13 13:02:28.205008] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.205025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.205031] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.205049] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.205062] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748205061, replica_locations:[]}) [2024-09-13 13:02:28.205080] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4721] get empty location from meta table(ret=-4721, 
ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.205095] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:28.205105] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.205114] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=9][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:28.205124] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:28.205133] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:28.205152] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=17][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:28.205166] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:28.205177] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:28.205185] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:28.205193] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:28.205200] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:28.205209] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:28.205222] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=-4721] failed to generate plan tree for plain 
select(ret=-4721) [2024-09-13 13:02:28.205228] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:28.205234] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:28.205240] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:28.205246] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:28.205252] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:28.205264] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:28.205275] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:28.205282] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:28.205288] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY 
row_id, column_name, ret=-4721) [2024-09-13 13:02:28.205295] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:28.205301] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=61, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:28.205336] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=26] will sleep(sleep_us=31637, remain_us=31637, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203748236973) [2024-09-13 13:02:28.212676] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=21][errcode=0] server is initiating(server_id=0, local_seq=40, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:28.214109] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=31] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, table_name.ptr()="data_size:12, data:5F5F616C6C5F736572766572", ret=-5019) [2024-09-13 13:02:28.214149] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=38][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-09-13 13:02:28.214160] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) 
[20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=10][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_server, db_name=oceanbase) [2024-09-13 13:02:28.214174] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=13][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-09-13 13:02:28.214188] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=12][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:28.214195] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=6][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:28.214205] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=6][errcode=-5019] Table 'oceanbase.__all_server' doesn't exist [2024-09-13 13:02:28.214212] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:28.214218] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=5][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:28.214233] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=14][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:28.214240] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=6][errcode=-5019] fail to exec 
resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:28.214248] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=7][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:28.214255] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=6][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:28.214262] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=6][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:28.214282] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=14][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:28.214294] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=11][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:28.214302] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=6][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:28.214314] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=11][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:28.214322] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=6][errcode=-5019] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882, ret=-5019) [2024-09-13 13:02:28.214330] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) 
[20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=8][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:28.214339] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=8][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:28.214364] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=21][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:28.214385] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=17][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:28.214392] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=6][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:28.214397] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=6][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:28.214414] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=8][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:28.214423] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=8][errcode=0] the key is not valid which at plan cache 
mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.214433] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7E-0-0] [lt=10][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:28.214456] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882)
[2024-09-13 13:02:28.214464] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:28.214474] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:28.214482] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203748213939, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882)
[2024-09-13 13:02:28.214498] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:28.214507] WDIAG get_my_sql_result_ (ob_table_access_helper.h:435) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x2b07c6c55878, table=__all_server, condition=where svr_ip='172.16.51.35' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882, columns_str="zone")
[2024-09-13 13:02:28.214527] WDIAG read_and_convert_to_values_ (ob_table_access_helper.h:332) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-5019] fail to get ObMySQLResult(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, table=__all_server, condition=where svr_ip='172.16.51.35' and svr_port=2882)
[2024-09-13 13:02:28.214632] WDIAG [COORDINATOR] get_self_zone_name (table_accessor.cpp:634) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-5019] get zone from __all_server failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", columns=0x2b07c6c55878, where_condition="where svr_ip='172.16.51.35' and svr_port=2882", zone_name_holder=)
[2024-09-13 13:02:28.214662] WDIAG [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:567) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=28][errcode=-5019] get self zone name failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", all_ls_election_reference_info=[])
[2024-09-13 13:02:28.214672] WDIAG [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:576) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] zone name is empty(ret=-5019, ret="OB_TABLE_NOT_EXIST", all_ls_election_reference_info=[])
[2024-09-13 13:02:28.214680] WDIAG [COORDINATOR] refresh (ob_leader_coordinator.cpp:144) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] get all ls election reference info failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:28.214696] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}])
[2024-09-13 13:02:28.215703] INFO [CLOG.EXTLOG] resize_log_ext_handler_ (ob_cdc_service.cpp:649) [20225][T1_CdcSrv][T1][Y0-0000000000000000-0-0] [lt=10] finish to resize log external storage handler(current_ts=1726203748215700, tenant_max_cpu=2, valid_ls_v1_count=0, valid_ls_v2_count=0, other_ls_count=0, new_concurrency=0)
[2024-09-13 13:02:28.216884] INFO [CLOG] run1 (ob_garbage_collector.cpp:1358) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=4] Garbage Collector is running(seq_=2, gc_interval=10000000)
[2024-09-13 13:02:28.216923] INFO [CLOG] gc_check_member_list_ (ob_garbage_collector.cpp:1451) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=16] gc_check_member_list_ cost time(ret=0, time_us=23)
[2024-09-13 13:02:28.216944] INFO [CLOG] execute_gc_ (ob_garbage_collector.cpp:1723) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=15] execute_gc cost time(ret=0, time_us=1)
[2024-09-13 13:02:28.216955] INFO [CLOG] execute_gc_ (ob_garbage_collector.cpp:1723) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=6] execute_gc cost time(ret=0, time_us=0)
[2024-09-13 13:02:28.216964] INFO [SERVER] handle (ob_safe_destroy_handler.cpp:240) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=7] ObSafeDestroyHandler start process
[2024-09-13 13:02:28.216980] INFO [SERVER] loop (ob_safe_destroy_handler.cpp:133) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=7] ObSafeDestroyTaskQueue::loop begin(queue_.size()=0)
[2024-09-13 13:02:28.217011] INFO [SERVER] loop (ob_safe_destroy_handler.cpp:140) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=11] ObSafeDestroyTaskQueue::loop finish(ret=0, queue_.size()=0)
[2024-09-13 13:02:28.220994] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.221268] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.221286] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.221292] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.221300] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.221319] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748221317, replica_locations:[]})
[2024-09-13 13:02:28.221329] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.221343] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:60, local_retry_times:60, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:28.221355] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.221364] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.221373] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:28.221377] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:28.221381] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:28.221398] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:28.221406] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.221450] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556098559, cache_obj->added_lc()=false, cache_obj->get_object_id()=438, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.222217] INFO [CLOG] do_thread_task_ (ob_remote_fetch_log_worker.cpp:250) [20226][T1_RFLWorker][T1][YB42AC103323-000621F920860C7D-0-0] [lt=6] ObRemoteFetchWorker is running(thread_index=0)
[2024-09-13 13:02:28.222320] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:28.222347] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:28.222461] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.222748] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.222766] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.222773] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.222781] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.222791] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748222790, replica_locations:[]})
[2024-09-13 13:02:28.222801] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:28.222847] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0] will sleep(sleep_us=43634, remain_us=43634, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203748266480)
[2024-09-13 13:02:28.224088] INFO [STORAGE.TRANS] run1 (ob_xa_trans_heartbeat_worker.cpp:84) [20243][T1_ObXAHbWorker][T1][Y0-0000000000000000-0-0] [lt=12] XA scheduler heartbeat task statistics(avg_time=2)
[2024-09-13 13:02:28.226885] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=9] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:28.226923] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:305) [20249][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=4] ====== traversal_flush timer task ======
[2024-09-13 13:02:28.226947] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:338) [20249][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=19] no logstream(ret=0, ls_cnt=0)
[2024-09-13 13:02:28.226981] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=18] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952)
[2024-09-13 13:02:28.227078] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:130) [20248][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=8] ====== checkpoint timer task ======
[2024-09-13 13:02:28.227107] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:193) [20248][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=22] no logstream(ret=0, ls_cnt=0)
[2024-09-13 13:02:28.228066] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:116) [20251][T1_TabletGC][T1][Y0-0000000000000000-0-0] [lt=7] ====== [tabletchange] timer task ======(GC_CHECK_INTERVAL=5000000)
[2024-09-13 13:02:28.228090] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:242) [20251][T1_TabletGC][T1][Y0-0000000000000000-0-0] [lt=18] [tabletchange] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, times=2)
[2024-09-13 13:02:28.229042] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=19] gc stale ls task succ
[2024-09-13 13:02:28.229402] INFO [SQL.DTL] runTimerTask (ob_dtl_interm_result_manager.cpp:611) [20206][T1_TntSharedTim][T1][Y0-0000000000000000-0-0] [lt=6] clear dtl interm result cost(us)(clear_cost=3794, ret=0, gc_.expire_keys_.count()=0, dump count=0, clean count=0)
[2024-09-13 13:02:28.229459] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:346) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=12] ====== check clog disk timer task ======
[2024-09-13 13:02:28.229475] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=13] get_disk_usage(ret=0, capacity(MB):=0, used(MB):=0)
[2024-09-13 13:02:28.229493] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=12] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false)
[2024-09-13 13:02:28.229520] INFO [STORAGE] runTimerTask (ob_empty_shell_task.cpp:39) [20252][T1_TabletShell][T1][Y0-0000000000000000-0-0] [lt=6] ====== [emptytablet] empty shell timer task ======(GC_EMPTY_TABLET_SHELL_INTERVAL=5000000)
[2024-09-13 13:02:28.229540] INFO [STORAGE] runTimerTask (ob_empty_shell_task.cpp:107) [20252][T1_TabletShell][T1][Y0-0000000000000000-0-0] [lt=15] [emptytablet] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, times=2)
[2024-09-13 13:02:28.230517] INFO [STORAGE.TRANS] dump_mapper_info (ob_lock_wait_mgr.h:66) [20231][T1_LockWaitMgr][T1][Y0-0000000000000000-0-0] [lt=25] report RowHolderMapper summary info(count=0, bkt_cnt=248)
[2024-09-13 13:02:28.233535] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=18] start do ls ha handler(ls_id_array_=[])
[2024-09-13 13:02:28.236772] INFO [DETECT] record_summary_info_and_logout_when_necessary_ (ob_lcl_batch_sender_thread.cpp:203) [20240][T1_LCLSender][T1][Y0-0000000000000000-0-0] [lt=23] ObLCLBatchSenderThread periodic report summary info(duty_ratio_percentage=0, total_constructed_detector=0, total_destructed_detector=0, total_alived_detector=0, _lcl_op_interval=30000, lcl_msg_map_.count()=0, *this={this:0x2b07c25fe2b0, is_inited:true, is_running:true, total_record_time:5010000, over_night_times:0})
[2024-09-13 13:02:28.237070] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203748236974, ctx_timeout_ts=1726203748236974, worker_timeout_ts=1726203748236973, default_timeout=1000000)
[2024-09-13 13:02:28.237089] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=18][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:28.237107] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}])
[2024-09-13 13:02:28.237122] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.237141] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}])
[2024-09-13 13:02:28.237161] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.237172] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.237202] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.237257] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556114371, cache_obj->added_lc()=false, cache_obj->get_object_id()=439, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.237760] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=0] read /sys/class/net/eth0/speed failed, errno 22
[2024-09-13 13:02:28.237780] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000
[2024-09-13 13:02:28.237786] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0")
[2024-09-13 13:02:28.237799] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR")
[2024-09-13 13:02:28.237958] INFO [ARCHIVE] do_thread_task_ (ob_archive_fetcher.cpp:312) [20255][T1_ArcFetcher][T1][YB42AC103323-000621F920E60C7D-0-0] [lt=5] ObArchiveFetcher is running(thread_index=0)
[2024-09-13 13:02:28.238507] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.238570] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=1][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203748236973, ctx_timeout_ts=1726203748236973, worker_timeout_ts=1726203748236973, default_timeout=1000000)
[2024-09-13 13:02:28.238592] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=21][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:28.238605] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}])
[2024-09-13 13:02:28.238625] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=20][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:28.238638] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=12][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:28.238657] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=19][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012)
[2024-09-13 13:02:28.238697] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false)
[2024-09-13 13:02:28.238704] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:104) [20264][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=8] tx gc loop thread is running(MTL_ID()=1)
[2024-09-13 13:02:28.238714] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.238717] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:111) [20264][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=12] try gc retain ctx
[2024-09-13 13:02:28.238726] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.238754] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=7] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721)
[2024-09-13 13:02:28.238773] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=0][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012)
[2024-09-13 13:02:28.238786] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012)
[2024-09-13 13:02:28.238803] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=16][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.238812] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=8] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000875)
[2024-09-13 13:02:28.238824] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C80-0-0] [lt=11][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012)
[2024-09-13 13:02:28.238835] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:28.238847] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1)
[2024-09-13 13:02:28.238855] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:28.238867] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4012] query failed(ret=-4012, conn=0x2b07a13e0060, start=1726203746237926, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:28.238891] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=25][errcode=-4012] read failed(ret=-4012)
[2024-09-13 13:02:28.238901] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:28.238916] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.238940] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556116059, cache_obj->added_lc()=false, cache_obj->get_object_id()=441, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.239016] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4012] load failed(ret=-4012, for_update=false)
[2024-09-13 13:02:28.239029] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:28.239039] WDIAG [SHARE] get_snapshot_gc_scn (ob_global_stat_proxy.cpp:164) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4012] get failed(ret=-4012)
[2024-09-13 13:02:28.239050] WDIAG [STORAGE] get_global_info (ob_tenant_freeze_info_mgr.cpp:811) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4012] fail to get global info(ret=-4012, tenant_id=1)
[2024-09-13 13:02:28.239061] WDIAG [STORAGE] try_update_info (ob_tenant_freeze_info_mgr.cpp:954) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4012] failed to get global info(ret=-4012)
[2024-09-13 13:02:28.239072] WDIAG [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:1008) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4012] fail to try update info(tmp_ret=-4012, tmp_ret="OB_TIMEOUT")
[2024-09-13 13:02:28.239095] INFO [STORAGE] try_update_reserved_snapshot (ob_tenant_freeze_info_mgr.cpp:1044) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10] success to update min reserved snapshot(reserved_snapshot=0, duration=1800, snapshot_gc_ts_=0)
[2024-09-13 13:02:28.239114] INFO [STORAGE] try_update_reserved_snapshot (ob_tenant_freeze_info_mgr.cpp:1071) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=13] update reserved snapshot finished(cost_ts=23, reserved_snapshot=0)
[2024-09-13 13:02:28.239553] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.240546] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.240881] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.246396] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C86-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.246850] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.246896] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=44][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.246912] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.246929] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.246971] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=12][errcode=0] server is initiating(server_id=0, local_seq=41, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:28.248427] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=14] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019)
[2024-09-13 13:02:28.248474] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=44][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-09-13 13:02:28.248486] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=11][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase)
[2024-09-13 13:02:28.248498] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=11][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-09-13 13:02:28.248511] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=11][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:28.248523] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=11][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:28.248537] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=10][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-09-13 13:02:28.248558] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=20][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:28.248566] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=7][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:28.248573] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=6][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:28.248584] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=10][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:28.248596] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=11][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:28.248606] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=9][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:28.248618] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=11][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:28.248635] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=13][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:28.248646] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=10][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:28.248656] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=8][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:28.248666] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=9][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:28.248679] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=12][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-09-13 13:02:28.248692] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=12][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:28.248705] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=12][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:28.248725] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=15][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:28.248744] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=16][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:28.248755] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=10][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:28.248765] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=9][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:28.248782] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=11][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:28.248796] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.248804] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:28.248816] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=11][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:28.248829] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=12][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:28.248841] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=10][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:28.248857] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=14][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203748248277, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1
ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:28.248871] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=13][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:28.248890] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=18][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:28.248955] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=13][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:28.248971] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=15][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:28.248985] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=14][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:28.248998] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=12][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:28.249011] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=11][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:28.249025] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] 
[lt=13][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:28.249036] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C86-0-0] [lt=10][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:28.254507] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=14][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:28.254533] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.254896] WDIAG [PALF] convert_to_ts (scn.cpp:265) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4016] invalid scn should not convert to ts (val_=18446744073709551615) [2024-09-13 13:02:28.254905] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA0-0-0] [lt=37][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748254493) [2024-09-13 13:02:28.254914] INFO [STORAGE.TRANS] print_stat_ (ob_tenant_weak_read_service.cpp:541) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [WRS] [TENANT_WEAK_READ_SERVICE] [STAT](tenant_id=1, server_version={version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0}, server_version_delta=1726203748254894, in_cluster_service=false, cluster_version={val:18446744073709551615, v:3}, min_cluster_version={val:18446744073709551615, v:3}, max_cluster_version={val:18446744073709551615, v:3}, 
get_cluster_version_err=0, cluster_version_delta=-1, cluster_service_master="0.0.0.0:0", cluster_service_tablet_id={id:226}, post_cluster_heartbeat_count=0, succ_cluster_heartbeat_count=0, cluster_heartbeat_interval=1000000, local_cluster_version={val:0, v:0}, local_cluster_delta=1726203748254894, force_self_check=true, weak_read_refresh_interval=100000) [2024-09-13 13:02:28.254920] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA0-0-0] [lt=13][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203748254493}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:28.254940] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:28.254963] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get 
gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.254972] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.254977] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748254952) [2024-09-13 13:02:28.254988] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:28.255045] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748254985) [2024-09-13 13:02:28.255060] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203748154916, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:28.255072] WDIAG [STORAGE.TRANS] generate_min_weak_read_version 
(ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.255086] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.255090] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748255067) [2024-09-13 13:02:28.256149] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.256394] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1966) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(dag_cnt=0, map_size=0) [2024-09-13 13:02:28.256418] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1976) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=22] dump_dag_status(running_dag_net_map_size=0, blocking_dag_net_list_size=0) [2024-09-13 13:02:28.256430] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(priority="PRIO_COMPACTION_HIGH", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:28.256455] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=24] dump_dag_status(priority="PRIO_HA_HIGH", low_limit=8, up_limit=8, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:28.256462] INFO [COMMON] dump_dag_status 
(ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=7] dump_dag_status(priority="PRIO_COMPACTION_MID", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:28.256471] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(priority="PRIO_HA_MID", low_limit=5, up_limit=5, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:28.256479] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(priority="PRIO_COMPACTION_LOW", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:28.256487] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(priority="PRIO_HA_LOW", low_limit=2, up_limit=2, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:28.256495] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(priority="PRIO_DDL", low_limit=2, up_limit=2, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:28.256503] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(priority="PRIO_DDL_HIGH", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:28.256511] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(priority="PRIO_TTL", low_limit=2, up_limit=2, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:28.256520] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] 
dump_dag_status(type={init_dag_prio:0, sys_task_type:3, dag_type_str:"MINI_MERGE", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:28.256537] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=16] dump_dag_status(type={init_dag_prio:0, sys_task_type:3, dag_type_str:"MINI_MERGE", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:28.256545] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:2, sys_task_type:5, dag_type_str:"MINOR_EXECUTE", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:28.256553] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:2, sys_task_type:5, dag_type_str:"MINOR_EXECUTE", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:28.256561] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:4, sys_task_type:6, dag_type_str:"MAJOR_MERGE", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:28.256570] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:4, sys_task_type:6, dag_type_str:"MAJOR_MERGE", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:28.256576] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:0, sys_task_type:4, dag_type_str:"TX_TABLE_MERGE", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:28.256584] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] 
dump_dag_status(type={init_dag_prio:0, sys_task_type:4, dag_type_str:"TX_TABLE_MERGE", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:28.256591] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=7] dump_dag_status(type={init_dag_prio:4, sys_task_type:7, dag_type_str:"WRITE_CKPT", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:28.256600] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=9] dump_dag_status(type={init_dag_prio:4, sys_task_type:7, dag_type_str:"WRITE_CKPT", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:28.256611] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:0, sys_task_type:19, dag_type_str:"MDS_TABLE_MERGE", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:28.256647] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=35] dump_dag_status(type={init_dag_prio:0, sys_task_type:19, dag_type_str:"MDS_TABLE_MERGE", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:28.256657] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=10] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"DDL", dag_module_str:"DDL"}, dag_count=0) [2024-09-13 13:02:28.256669] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"DDL", dag_module_str:"DDL"}, scheduled_task_count=0) [2024-09-13 13:02:28.256682] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:6, 
sys_task_type:2, dag_type_str:"UNIQUE_CHECK", dag_module_str:"DDL"}, dag_count=0) [2024-09-13 13:02:28.256693] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"UNIQUE_CHECK", dag_module_str:"DDL"}, scheduled_task_count=0) [2024-09-13 13:02:28.256705] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"SQL_BUILD_INDEX", dag_module_str:"DDL"}, dag_count=0) [2024-09-13 13:02:28.256721] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=15] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"SQL_BUILD_INDEX", dag_module_str:"DDL"}, scheduled_task_count=0) [2024-09-13 13:02:28.256734] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:7, sys_task_type:12, dag_type_str:"DDL_KV_MERGE", dag_module_str:"DDL"}, dag_count=0) [2024-09-13 13:02:28.256745] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:7, sys_task_type:12, dag_type_str:"DDL_KV_MERGE", dag_module_str:"DDL"}, scheduled_task_count=0) [2024-09-13 13:02:28.256757] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.256769] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, 
dag_type_str:"INITIAL_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.256781] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.256794] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=13] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.256807] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"FINISH_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.256819] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"FINISH_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.256831] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.256843] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.256856] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=14] 
dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.256866] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=10] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.256889] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=23] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"SYS_TABLETS_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.256897] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"SYS_TABLETS_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.256905] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"TABLET_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.256916] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=10] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"TABLET_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.256927] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"DATA_TABLETS_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.256939] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] 
dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"DATA_TABLETS_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.256951] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"TABLET_GROUP_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.256965] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=14] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"TABLET_GROUP_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.256974] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=9] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"MIGRATION_FINISH", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.256989] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=14] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"MIGRATION_FINISH", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257000] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.257013] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257025] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) 
[20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.257034] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=9] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257042] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"FINISH_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.257050] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"FINISH_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257058] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:3, sys_task_type:1, dag_type_str:"FAST_MIGRATE", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.257072] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=13] dump_dag_status(type={init_dag_prio:3, sys_task_type:1, dag_type_str:"FAST_MIGRATE", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257081] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=9] dump_dag_status(type={init_dag_prio:5, sys_task_type:1, dag_type_str:"VALIDATE", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:28.257094] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) 
[20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:5, sys_task_type:1, dag_type_str:"VALIDATE", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257103] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:1, sys_task_type:16, dag_type_str:"TABLET_BACKFILL_TX", dag_module_str:"BACKFILL_TX"}, dag_count=0) [2024-09-13 13:02:28.257111] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:1, sys_task_type:16, dag_type_str:"TABLET_BACKFILL_TX", dag_module_str:"BACKFILL_TX"}, scheduled_task_count=0) [2024-09-13 13:02:28.257118] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=7] dump_dag_status(type={init_dag_prio:1, sys_task_type:16, dag_type_str:"FINISH_BACKFILL_TX", dag_module_str:"BACKFILL_TX"}, dag_count=0) [2024-09-13 13:02:28.257127] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:1, sys_task_type:16, dag_type_str:"FINISH_BACKFILL_TX", dag_module_str:"BACKFILL_TX"}, scheduled_task_count=0) [2024-09-13 13:02:28.257139] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_META", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:28.257152] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_META", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:28.257164] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) 
[20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_PREPARE", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:28.257174] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=10] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_PREPARE", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:28.257182] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_FINISH", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:28.257190] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_FINISH", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:28.257198] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_DATA", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:28.257211] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_DATA", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:28.257223] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"PREFETCH_BACKUP_INFO", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:28.257235] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) 
[20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"PREFETCH_BACKUP_INFO", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:28.257246] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=10] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_INDEX_REBUILD", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:28.257258] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_INDEX_REBUILD", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:28.257270] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_COMPLEMENT_LOG", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:28.257278] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=8] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_COMPLEMENT_LOG", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:28.257290] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:5, sys_task_type:10, dag_type_str:"BACKUP_BACKUPSET", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:28.257310] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=20] dump_dag_status(type={init_dag_prio:5, sys_task_type:10, dag_type_str:"BACKUP_BACKUPSET", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:28.257323] INFO [COMMON] dump_dag_status 
(ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:5, sys_task_type:11, dag_type_str:"BACKUP_ARCHIVELOG", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:28.257334] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:5, sys_task_type:11, dag_type_str:"BACKUP_ARCHIVELOG", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:28.257346] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"INITIAL_LS_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:28.257358] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"INITIAL_LS_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257370] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"START_LS_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:28.257383] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"START_LS_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257395] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"SYS_TABLETS_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:28.257407] INFO [COMMON] dump_dag_status 
(ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"SYS_TABLETS_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257418] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"DATA_TABLETS_META_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:28.257429] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"DATA_TABLETS_META_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257450] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=20] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"TABLET_GROUP_META_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:28.257461] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"TABLET_GROUP_META_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257473] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"FINISH_LS_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:28.257485] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"FINISH_LS_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 
13:02:28.257496] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"INITIAL_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:28.257508] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"INITIAL_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257521] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"START_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:28.257532] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"START_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257544] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"FINISH_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:28.257555] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"FINISH_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257569] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=13] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"TABLET_RESTORE", 
dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:28.257581] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"TABLET_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:28.257593] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:5, sys_task_type:15, dag_type_str:"BACKUP_CLEAN", dag_module_str:"BACKUP_CLEAN"}, dag_count=0) [2024-09-13 13:02:28.257603] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=10] dump_dag_status(type={init_dag_prio:5, sys_task_type:15, dag_type_str:"BACKUP_CLEAN", dag_module_str:"BACKUP_CLEAN"}, scheduled_task_count=0) [2024-09-13 13:02:28.257614] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:3, sys_task_type:17, dag_type_str:"REMOVE_MEMBER", dag_module_str:"REMOVE_MEMBER"}, dag_count=0) [2024-09-13 13:02:28.257626] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:3, sys_task_type:17, dag_type_str:"REMOVE_MEMBER", dag_module_str:"REMOVE_MEMBER"}, scheduled_task_count=0) [2024-09-13 13:02:28.257638] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:18, dag_type_str:"TRANSFER_BACKFILL_TX", dag_module_str:"TRANSFER"}, dag_count=0) [2024-09-13 13:02:28.257650] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:18, 
dag_type_str:"TRANSFER_BACKFILL_TX", dag_module_str:"TRANSFER"}, scheduled_task_count=0) [2024-09-13 13:02:28.257661] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:1, sys_task_type:18, dag_type_str:"TRANSFER_REPLACE_TABLE", dag_module_str:"TRANSFER"}, dag_count=0) [2024-09-13 13:02:28.257673] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:1, sys_task_type:18, dag_type_str:"TRANSFER_REPLACE_TABLE", dag_module_str:"TRANSFER"}, scheduled_task_count=0) [2024-09-13 13:02:28.257684] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=11] dump_dag_status(type={init_dag_prio:8, sys_task_type:20, dag_type_str:"TTL_DELTE_DAG", dag_module_str:"TTL"}, dag_count=0) [2024-09-13 13:02:28.257696] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(type={init_dag_prio:8, sys_task_type:20, dag_type_str:"TTL_DELTE_DAG", dag_module_str:"TTL"}, scheduled_task_count=0) [2024-09-13 13:02:28.257709] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=10] dump_dag_status[DAG_NET](type="DAG_NET_MIGRATION", dag_net_count=0) [2024-09-13 13:02:28.257718] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=9] dump_dag_status[DAG_NET](type="DAG_NET_PREPARE_MIGRATION", dag_net_count=0) [2024-09-13 13:02:28.257725] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status[DAG_NET](type="DAG_NET_COMPLETE_MIGRATION", dag_net_count=0) [2024-09-13 13:02:28.257735] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) 
[20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=9] dump_dag_status[DAG_NET](type="DAG_NET_TRANSFER", dag_net_count=0) [2024-09-13 13:02:28.257742] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status[DAG_NET](type="DAG_NET_BACKUP", dag_net_count=0) [2024-09-13 13:02:28.257748] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status[DAG_NET](type="DAG_NET_RESTORE", dag_net_count=0) [2024-09-13 13:02:28.257755] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=7] dump_dag_status[DAG_NET](type="DAG_NET_TYPE_BACKUP_CLEAN", dag_net_count=0) [2024-09-13 13:02:28.257763] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=7] dump_dag_status[DAG_NET](type="DAG_NET_TRANSFER_BACKFILL_TX", dag_net_count=0) [2024-09-13 13:02:28.257770] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1996) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(total_worker_cnt=43, total_running_task_cnt=0, work_thread_num=43, scheduled_task_cnt=0) [2024-09-13 13:02:28.266582] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=19][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203748266481, ctx_timeout_ts=1726203748266481, worker_timeout_ts=1726203748266480, default_timeout=1000000) [2024-09-13 13:02:28.266604] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=21][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:28.266612] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:28.266621] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.266638] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=14][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) [2024-09-13 13:02:28.266654] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.266663] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.266681] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.266735] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556143852, cache_obj->added_lc()=false, cache_obj->get_object_id()=440, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 
0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.267588] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=1][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203748266480, ctx_timeout_ts=1726203748266480, worker_timeout_ts=1726203748266480, default_timeout=1000000) [2024-09-13 13:02:28.267609] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:28.267615] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:28.267623] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=7][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:28.267635] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=12][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:28.267656] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=20][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 
13:02:28.267685] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:28.267699] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.267704] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.267725] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:28.267737] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=0][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:28.267750] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:28.267760] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.267766] INFO [SERVER] 
process_final (ob_inner_sql_connection.cpp:743) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=4] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000520) [2024-09-13 13:02:28.267772] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:28.267779] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=5][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:28.267798] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=18][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:28.267802] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:28.267807] WDIAG [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, 
snapshot_timestamp=-1) [2024-09-13 13:02:28.267819] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:28.267846] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C80-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556144967, cache_obj->added_lc()=false, cache_obj->get_object_id()=442, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.267911] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=13][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:28.267922] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=10][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:28.267926] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:28.267931] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=4][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:28.267949] WDIAG [SHARE.SCHEMA] get_baseline_schema_version 
(ob_multi_version_schema_service.cpp:4601) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=16][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:28.267957] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1) [2024-09-13 13:02:28.267963] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=6] [REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, cost=2001486) [2024-09-13 13:02:28.267971] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=8][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1) [2024-09-13 13:02:28.267980] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=7] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2001510) [2024-09-13 13:02:28.267990] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=9][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1]) [2024-09-13 13:02:28.267995] INFO [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=5] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:28.267999] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) 
[19944][SerScheQueue0][T0][YB42AC103323-000621F921460C80-0-0] [lt=3][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:28.268004] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-4012] fail to batch process task(ret=-4012) [2024-09-13 13:02:28.268013] WDIAG [SERVER] run1 (ob_uniq_task_queue.h:456) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1) [2024-09-13 13:02:28.268032] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=4] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1]) [2024-09-13 13:02:28.268040] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=6] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1) [2024-09-13 13:02:28.269663] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.269935] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.269953] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.269959] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.269970] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.269983] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748269982, replica_locations:[]}) [2024-09-13 13:02:28.270029] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1998026, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.270164] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.270342] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.270354] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.270359] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.270366] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.270388] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748270387, replica_locations:[]}) [2024-09-13 13:02:28.270400] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.270430] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.270448] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.270462] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] 
[lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.270490] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556147610, cache_obj->added_lc()=false, cache_obj->get_object_id()=443, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.271189] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.271428] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.271451] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.271466] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.271472] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.271479] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.271499] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748271498, replica_locations:[]})
[2024-09-13 13:02:28.271536] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=1000, remain_us=1996519, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.271607] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.271627] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.271637] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.271649] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.271658] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.271669] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:28.271684] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=14][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:28.271691] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638)
[2024-09-13 13:02:28.271770] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.272112] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.272126] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.272136] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.272145] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.272156] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748272155, replica_locations:[]})
[2024-09-13 13:02:28.272173] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=16][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:28.272242] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0][errcode=-4638] fail to refresh core partition(tmp_ret=-4721)
[2024-09-13 13:02:28.272467] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000)
[2024-09-13 13:02:28.272479] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638]
[2024-09-13 13:02:28.272587] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.272672] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.272851] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.272866] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.272898] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=31] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.272908] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.272917] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.272926] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:28.272935] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:28.272942] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0)
[2024-09-13 13:02:28.272997] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.273050] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.273069] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.273075] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.273086] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.273099] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748273099, replica_locations:[]})
[2024-09-13 13:02:28.273112] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.273132] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.273137] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.273174] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.273202] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556150322, cache_obj->added_lc()=false, cache_obj->get_object_id()=444, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.273336] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.273354] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.273363] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.273373] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.273381] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.273398] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=16] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:28.273405] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:28.273412] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1)
[2024-09-13 13:02:28.273484] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.273675] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.273688] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.273697] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.273710] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.273718] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.273727] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:28.273735] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:28.273741] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2)
[2024-09-13 13:02:28.273749] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638)
[2024-09-13 13:02:28.273758] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:28.273765] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2)
[2024-09-13 13:02:28.273900] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.274140] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:28.274382] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.274398] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.274405] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.274414] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.274422] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748274421, replica_locations:[]})
[2024-09-13 13:02:28.274468] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1993586, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.276606] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.276902] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.276917] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.276939] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.276948] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.276956] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748276956, replica_locations:[]})
[2024-09-13 13:02:28.276968] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.276985] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.276992] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.277009] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.277049] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556154170, cache_obj->added_lc()=false, cache_obj->get_object_id()=445, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.277685] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.277963] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.277983] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.277999] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.278011] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.278024] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748278023, replica_locations:[]})
[2024-09-13 13:02:28.278067] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1989987, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.279068] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4)
[2024-09-13 13:02:28.281162] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.281423] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.281460] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=36][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.281466] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.281475] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.281484] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748281483, replica_locations:[]})
[2024-09-13 13:02:28.281496] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.281518] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.281526] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.281544] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.281570] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556158691, cache_obj->added_lc()=false, cache_obj->get_object_id()=446, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.282181] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.282460] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.282480] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.282491] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.282505] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.282514] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748282514, replica_locations:[]})
[2024-09-13 13:02:28.282549] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1985505, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.286747] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.286965] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.286980] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.286985] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.286995] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.287004] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748287003, replica_locations:[]})
[2024-09-13 13:02:28.287016] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.287032] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.287037] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.287052] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.287077] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556164198, cache_obj->added_lc()=false, cache_obj->get_object_id()=447, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.287727] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.288192] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.288210] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.288230] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.288240] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.288251] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748288250, replica_locations:[]})
[2024-09-13 13:02:28.288286] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=5000, remain_us=1979768,
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.293427] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.293753] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.293765] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.293771] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.293776] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.293788] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748293788, replica_locations:[]}) [2024-09-13 13:02:28.293800] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.293816] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.293824] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.293839] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.293864] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556170985, cache_obj->added_lc()=false, cache_obj->get_object_id()=448, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.294556] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.294782] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.294797] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.294803] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.294809] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.294825] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748294824, replica_locations:[]}) [2024-09-13 13:02:28.294855] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1973199, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.301019] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.301268] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.301286] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.301292] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.301302] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.301310] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748301309, replica_locations:[]}) [2024-09-13 13:02:28.301322] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.301335] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:28.301349] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.301363] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.301375] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.301400] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556178521, cache_obj->added_lc()=false, cache_obj->get_object_id()=449, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.302060] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.302308] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.302324] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.302330] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.302340] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.302347] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748302346, replica_locations:[]}) [2024-09-13 13:02:28.302381] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1965674, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.309571] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.309943] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.309967] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.309974] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.309982] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.309993] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748309992, replica_locations:[]}) [2024-09-13 13:02:28.310005] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.310021] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.310029] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.310043] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.310071] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556187191, cache_obj->added_lc()=false, cache_obj->get_object_id()=450, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.310796] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.311027] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.311054] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.311060] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.311070] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.311080] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748311080, replica_locations:[]}) [2024-09-13 13:02:28.311117] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1956937, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.315701] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.317129] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.319275] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.319774] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.319794] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.319801] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.319811] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.319820] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748319820, replica_locations:[]}) [2024-09-13 13:02:28.319839] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.319857] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.319865] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.319897] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.319927] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556197047, cache_obj->added_lc()=false, cache_obj->get_object_id()=451, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.320662] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.320857] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.320890] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.320900] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.320911] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.320922] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748320921, replica_locations:[]}) [2024-09-13 13:02:28.320964] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1947090, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.327261] INFO [OCCAM] get_idx (ob_occam_time_guard.h:224) [20232][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4] init point thread id with(&point=0x55a3873cd680, idx_=3848, point=[thread id=20232, timeout ts=08:00:00.0, last click point="(null):(null):0", last click ts=08:00:00.0], thread_id=20232) [2024-09-13 13:02:28.329737] WDIAG [ARCHIVE] do_thread_task_ (ob_archive_sender.cpp:256) [20256][T1_ArcSender][T1][YB42AC103323-000621F920F60C7D-0-0] [lt=20][errcode=-4018] try free send task failed(ret=-4018) [2024-09-13 13:02:28.330142] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.330354] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] 
[lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.330370] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.330376] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.330386] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.330398] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748330397, replica_locations:[]}) [2024-09-13 13:02:28.330410] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.330425] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) 
[2024-09-13 13:02:28.330433] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.330456] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.330493] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556207614, cache_obj->added_lc()=false, cache_obj->get_object_id()=452, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.331221] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.331478] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.331493] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.331502] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] 
[lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.331512] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.331522] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748331521, replica_locations:[]})
[2024-09-13 13:02:28.331559] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=10000, remain_us=1936495, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.332821] INFO pn_ratelimit (group.c:643) [20054][IngressService][T0][Y0-0000000000000000-0-0] [lt=14] PNIO set ratelimit as 9223372036854775807 bytes/s, grp_id=2
[2024-09-13 13:02:28.335998] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:28.336034] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]})
[2024-09-13 13:02:28.336050] INFO [STORAGE.TRANS] statistics (ob_gts_source.cpp:70) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=15] gts statistics(tenant_id=1, gts_rpc_cnt=0, get_gts_cache_cnt=8881, get_gts_with_stc_cnt=0, try_get_gts_cache_cnt=0, try_get_gts_with_stc_cnt=0, wait_gts_elapse_cnt=0, try_wait_gts_elapse_cnt=0)
[2024-09-13 13:02:28.336047] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CB2-0-0] [lt=20][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203748336016})
[2024-09-13 13:02:28.336059] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=1] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1})
[2024-09-13 13:02:28.337454] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=17] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:28.338367] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=21][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:28.341750] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.342050] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.342073] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.342092] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.342104] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.342115] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748342115, replica_locations:[]})
[2024-09-13 13:02:28.342128] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.342148] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.342156] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.342174] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.342203] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556219324, cache_obj->added_lc()=false, cache_obj->get_object_id()=453, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.343093] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.343299] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.343314] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.343326] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.343337] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.343346] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748343346, replica_locations:[]})
[2024-09-13 13:02:28.343387] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1924667, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.348764] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1)
[2024-09-13 13:02:28.354590] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.354889] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.354913] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.354923] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.354934] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.354951] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748354950, replica_locations:[]})
[2024-09-13 13:02:28.354970] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.354998] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.355009] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.355008] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA1-0-0] [lt=18][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748354584)
[2024-09-13 13:02:28.355035] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.355026] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA1-0-0] [lt=17][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203748354584}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:28.355045] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:28.355057] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:28.355065] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748355031)
[2024-09-13 13:02:28.355076] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:28.355081] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556232197, cache_obj->added_lc()=false, cache_obj->get_object_id()=454, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.355088] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748355073)
[2024-09-13 13:02:28.355096] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203748255067, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:28.355109] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:28.355116] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:28.355119] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748355106)
[2024-09-13 13:02:28.355533] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=14][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:28.356131] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.356338] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.356371] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.356384] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.356391] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.356402] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748356401, replica_locations:[]})
[2024-09-13 13:02:28.356453] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1911601, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.360637] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B47-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:28.360655] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B47-0-0] [lt=16][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203748360221], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:28.361112] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD7-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:28.361772] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD7-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:28.368644] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.368952] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.368968] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.368974] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.368981] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.368993] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748368992, replica_locations:[]})
[2024-09-13 13:02:28.369013] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.369030] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.369038] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.369060] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.369109] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556246224, cache_obj->added_lc()=false, cache_obj->get_object_id()=455, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.370095] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.370278] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.370295] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.370301] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.370312] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.370323] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748370322, replica_locations:[]})
[2024-09-13 13:02:28.370374] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=13000, remain_us=1897681, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.377626] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.379013] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921990059-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.383575] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.383963] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.383992] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.383998] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.384009] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.384022] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748384021, replica_locations:[]})
[2024-09-13 13:02:28.384035] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.384064] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.384075] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.384099] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.384151] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556261267, cache_obj->added_lc()=false, cache_obj->get_object_id()=456, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.385542] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.385845] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.385867] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.385887] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.385899] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.385915] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748385914, replica_locations:[]})
[2024-09-13 13:02:28.385973] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1882082, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.388008] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=20] ====== tenant freeze timer task ======
[2024-09-13 13:02:28.388036] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=19][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}})
[2024-09-13 13:02:28.400164] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=231][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.400457] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.400476] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.400483] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.400499] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.400510] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748400509, replica_locations:[]})
[2024-09-13 13:02:28.400523] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.400544] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.400551] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.400573] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.400612] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556277729, cache_obj->added_lc()=false, cache_obj->get_object_id()=457, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.401541] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.401731] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.401752] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.401774] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.401785] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.401800] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748401799, replica_locations:[]})
[2024-09-13 13:02:28.401855] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1866199, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.417024] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.417279] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.417304] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.417315] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.417326] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.417340] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748417339, replica_locations:[]})
[2024-09-13 13:02:28.417361] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.417394] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.417406] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.417473] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.417530] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556294644, cache_obj->added_lc()=false, cache_obj->get_object_id()=458, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.418582] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=40][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.418789] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.418810] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0]
[lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.418819] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.418833] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.418846] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748418845, replica_locations:[]}) [2024-09-13 13:02:28.418916] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1849139, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.434210] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92169005F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.435123] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.435335] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.435358] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.435368] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.435381] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.435398] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748435397, replica_locations:[]}) [2024-09-13 13:02:28.435418] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.435452] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.435464] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.435494] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.435545] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556312659, cache_obj->added_lc()=false, cache_obj->get_object_id()=459, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.436572] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.436803] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.436843] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.436857] 
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.436869] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.436896] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748436895, replica_locations:[]}) [2024-09-13 13:02:28.436971] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=17000, remain_us=1831084, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.454191] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.454491] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.454511] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.454519] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.454527] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.454559] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748454540, replica_locations:[]}) [2024-09-13 13:02:28.454586] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.454606] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.454630] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.454658] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.454703] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556331821, cache_obj->added_lc()=false, cache_obj->get_object_id()=460, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.455117] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA2-0-0] [lt=30][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748454654) [2024-09-13 13:02:28.455145] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:28.455167] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, 
generate_timestamp=1726203748455138) [2024-09-13 13:02:28.455144] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA2-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203748454654}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:28.455180] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203748355106, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:28.455204] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.455213] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.455218] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ 
(ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748455191) [2024-09-13 13:02:28.455259] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.455269] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.455275] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748455254) [2024-09-13 13:02:28.455884] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.456041] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.456060] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.456066] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.456076] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.456089] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748456088, replica_locations:[]}) [2024-09-13 13:02:28.456137] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1811917, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.467754] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.468138] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.468818] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.469108] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.469352] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.474341] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.474517] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=35] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:28.474677] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.474694] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.474701] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.474709] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", 
server_list=[]) [2024-09-13 13:02:28.474720] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748474719, replica_locations:[]}) [2024-09-13 13:02:28.474738] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.474757] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.474764] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.474787] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.474828] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556351945, cache_obj->added_lc()=false, cache_obj->get_object_id()=461, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 
0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.476045] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.476198] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.476213] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.476219] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.476227] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.476235] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748476235, replica_locations:[]}) [2024-09-13 13:02:28.476303] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1791752, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.479155] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3) [2024-09-13 13:02:28.495696] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.496081] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.496131] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=48][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.496144] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.496155] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.496170] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] 
[lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748496169, replica_locations:[]}) [2024-09-13 13:02:28.496187] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.496216] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.496225] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.496250] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.496319] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556373433, cache_obj->added_lc()=false, cache_obj->get_object_id()=462, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.497413] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.497804] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.497818] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.497825] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.497831] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.497840] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748497839, replica_locations:[]}) [2024-09-13 13:02:28.497897] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=20000, remain_us=1770157, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, 
v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.518171] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=39][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.518535] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.518560] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.518581] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.518593] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.518609] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748518608, replica_locations:[]}) [2024-09-13 13:02:28.518627] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.518687] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.518695] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.518720] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.518808] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556395886, cache_obj->added_lc()=false, cache_obj->get_object_id()=463, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.520668] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.520937] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:28.520982] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=44][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.521009] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=26] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.521024] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.521040] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748521039, replica_locations:[]}) [2024-09-13 13:02:28.521098] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=21000, remain_us=1746956, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.542337] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.542583] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] 
[lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.542609] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.542620] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.542633] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.542648] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748542647, replica_locations:[]}) [2024-09-13 13:02:28.542665] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.542702] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is 
null(ret=-4006) [2024-09-13 13:02:28.542712] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.542741] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.542785] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556419902, cache_obj->added_lc()=false, cache_obj->get_object_id()=464, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.544140] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.544347] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.544371] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.544382] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.544394] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.544409] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748544408, replica_locations:[]}) [2024-09-13 13:02:28.544482] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1723572, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.555240] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA3-0-0] [lt=34][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748554759) [2024-09-13 13:02:28.555273] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA3-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, 
req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203748554759}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:28.555291] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:28.555322] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748555283) [2024-09-13 13:02:28.555338] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203748455189, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:28.555360] WDIAG [STORAGE.TRANS] generate_min_weak_read_version 
(ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.555376] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.555381] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748555347) [2024-09-13 13:02:28.566731] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.566977] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.567002] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.567014] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.567027] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] 
[lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.567050] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748567049, replica_locations:[]}) [2024-09-13 13:02:28.567068] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.567098] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.567109] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.567155] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.567202] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556444319, cache_obj->added_lc()=false, cache_obj->get_object_id()=465, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 
0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.568642] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.568820] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.568841] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.568852] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.568864] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.568886] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748568886, replica_locations:[]}) [2024-09-13 
13:02:28.568944] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1699110, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.592208] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.592433] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.592467] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.592479] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.592492] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.592508] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748592507, replica_locations:[]}) [2024-09-13 13:02:28.592524] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.592550] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.592560] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.592591] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.592639] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556469756, cache_obj->added_lc()=false, cache_obj->get_object_id()=466, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.593719] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.593949] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.593982] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.593994] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.594005] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.594019] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748594018, replica_locations:[]}) [2024-09-13 13:02:28.594071] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=24000, remain_us=1673983, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.618374] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.618674] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.618708] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.618725] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.618743] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.618774] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748618773, replica_locations:[]}) [2024-09-13 13:02:28.618800] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.618834] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.618885] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=39][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.618927] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.618989] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556496103, cache_obj->added_lc()=false, cache_obj->get_object_id()=467, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.620389] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.620621] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.620649] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] 
[lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.620666] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.620683] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.620710] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748620708, replica_locations:[]}) [2024-09-13 13:02:28.620783] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=25000, remain_us=1647272, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.621935] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=64] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 
9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 
9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:28.646058] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.646417] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.646483] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=64][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.646501] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.646538] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=34] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) 
[2024-09-13 13:02:28.646570] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748646568, replica_locations:[]}) [2024-09-13 13:02:28.646596] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.646659] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.646679] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.646719] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.646797] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556523909, cache_obj->added_lc()=false, cache_obj->get_object_id()=468, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 
0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.648177] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.648424] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.648471] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=47][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.648488] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.648506] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.648525] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748648524, replica_locations:[]}) [2024-09-13 13:02:28.648592] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1619462, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.655346] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA4-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748654833) [2024-09-13 13:02:28.655363] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.655379] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.655388] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748655344) [2024-09-13 13:02:28.655403] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:28.655380] WDIAG [STORAGE.TRANS] 
process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA4-0-0] [lt=32][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203748654833}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:28.655419] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748655398) [2024-09-13 13:02:28.655432] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203748555345, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:28.655470] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.655476] WDIAG 
[STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.655481] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748655466) [2024-09-13 13:02:28.674897] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.674912] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=23] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:28.675142] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.675172] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.675189] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) 
[2024-09-13 13:02:28.675207] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.675229] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748675228, replica_locations:[]}) [2024-09-13 13:02:28.675254] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.675288] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.675303] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.675334] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.675393] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556552506, cache_obj->added_lc()=false, 
cache_obj->get_object_id()=469, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.676743] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=51][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.677022] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.677056] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.677098] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=40] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.677121] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.677142] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748677140, replica_locations:[]}) [2024-09-13 13:02:28.677234] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1590820, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.679238] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2) [2024-09-13 13:02:28.704527] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.704770] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.704800] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.704817] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.704835] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] server_list is empty, 
do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.704867] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748704866, replica_locations:[]}) [2024-09-13 13:02:28.704907] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=37] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.704940] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.704977] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=35][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.705025] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.705086] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556582199, cache_obj->added_lc()=false, cache_obj->get_object_id()=470, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 
0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.706367] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.706831] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.706860] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.706895] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=34] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.706916] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.706945] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748706944, replica_locations:[]}) [2024-09-13 13:02:28.707014] INFO [SERVER] 
sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=28000, remain_us=1561041, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.726979] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:28.727041] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=33] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:28.735316] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.735525] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=42][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.735587] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=60][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.735610] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.735629] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.735675] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=37] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748735673, replica_locations:[]}) [2024-09-13 13:02:28.735702] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.735766] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.735783] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.735830] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.735917] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556613029, cache_obj->added_lc()=false, cache_obj->get_object_id()=471, 
cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.737386] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.737573] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.737601] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.737617] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.737641] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.737660] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748737659, replica_locations:[]}) [2024-09-13 13:02:28.737727] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=29000, remain_us=1530328, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.745130] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=16][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:28.755377] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA5-0-0] [lt=45][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748754918) [2024-09-13 13:02:28.755416] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA5-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203748754918}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, 
valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:28.755453] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.755478] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.755488] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748755429) [2024-09-13 13:02:28.767002] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.767289] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.767330] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.767347] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.767365] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.767387] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748767386, replica_locations:[]}) [2024-09-13 13:02:28.767424] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.767481] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.767496] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.767534] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.767595] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6556644708, cache_obj->added_lc()=false, cache_obj->get_object_id()=472, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.768887] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.769144] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.769167] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.769178] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.769188] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.769204] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748769203, replica_locations:[]}) [2024-09-13 13:02:28.769264] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1498791, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.799506] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.799773] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.799799] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.799808] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.799819] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.799836] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748799834, replica_locations:[]}) [2024-09-13 13:02:28.799857] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.799896] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.799909] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.799932] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.799989] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556677102, cache_obj->added_lc()=false, cache_obj->get_object_id()=473, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:28.801401] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.801642] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.801664] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.801673] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.801686] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.801701] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748801700, replica_locations:[]}) [2024-09-13 13:02:28.801763] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=31000, 
remain_us=1466292, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.833010] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.833311] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.833338] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.833347] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.833362] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.833391] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748833389, replica_locations:[]}) [2024-09-13 13:02:28.833410] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.833444] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.833457] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.833486] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.833546] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556710658, cache_obj->added_lc()=false, cache_obj->get_object_id()=474, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.834808] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.835042] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.835064] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.835072] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.835083] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.835106] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748835105, replica_locations:[]}) [2024-09-13 13:02:28.835170] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=32000, remain_us=1432885, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.836511] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:28.855457] WDIAG 
[STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA6-0-0] [lt=42][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748854987) [2024-09-13 13:02:28.855495] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:28.855485] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA6-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203748854987}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:28.855512] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, 
dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:28.855541] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748855487) [2024-09-13 13:02:28.855557] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203748655464, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:28.855585] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.855606] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.855617] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748855573) [2024-09-13 13:02:28.855635] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.855645] 
WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:28.855655] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748855631) [2024-09-13 13:02:28.861097] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B48-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:28.861113] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B48-0-0] [lt=14][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203748860698], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:28.861601] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD8-0-0] [lt=14][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203748861196, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035455, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203748860141}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:28.861632] WDIAG [RPC.FRAME] 
run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD8-0-0] [lt=31][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:28.862181] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD8-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:28.867397] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.867651] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.867679] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.867700] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.867715] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.867733] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, 
renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748867733, replica_locations:[]}) [2024-09-13 13:02:28.867754] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.867783] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.867794] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.867821] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.867889] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556744991, cache_obj->added_lc()=false, cache_obj->get_object_id()=475, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:28.869142] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=32][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:28.869390] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.869459] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=68][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.869482] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.869494] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.869510] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748869509, replica_locations:[]}) [2024-09-13 13:02:28.869574] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=33000, remain_us=1398481, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:28.873070] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) 
[20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:28.873325] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=18] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:28.873944] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:28.875241] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:28.879316] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=16] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1) [2024-09-13 13:02:28.902782] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:28.903072] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.903090] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:28.903097] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:28.903104] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:28.903117] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748903116, replica_locations:[]}) [2024-09-13 13:02:28.903137] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:28.903160] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:28.903169] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:28.903197] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:28.903243] WDIAG [SQL.PC] 
common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556780359, cache_obj->added_lc()=false, cache_obj->get_object_id()=476, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.904245] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.904455] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.904471] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.904478] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.904488] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.904499] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748904498, replica_locations:[]})
[2024-09-13 13:02:28.904553] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=34000, remain_us=1363502, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.938775] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.939072] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.939091] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.939098] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.939108] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.939122] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748939121, replica_locations:[]})
[2024-09-13 13:02:28.939137] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.939157] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.939166] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.939184] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.939230] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556816347, cache_obj->added_lc()=false, cache_obj->get_object_id()=477, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.940214] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.940509] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.940526] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.940532] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.940541] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.940549] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748940549, replica_locations:[]})
[2024-09-13 13:02:28.940596] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=35000, remain_us=1327459, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:28.955541] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA7-0-0] [lt=27][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748955062)
[2024-09-13 13:02:28.955573] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA7-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203748955062}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:28.955590] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:28.955608] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203748955583)
[2024-09-13 13:02:28.955617] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203748855571, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:28.955644] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:28.955650] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:28.955655] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203748955632)
[2024-09-13 13:02:28.975820] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.976197] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.976221] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.976228] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.976238] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.976250] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748976249, replica_locations:[]})
[2024-09-13 13:02:28.976265] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:28.976289] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:28.976297] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:28.976321] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:28.976367] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556853484, cache_obj->added_lc()=false, cache_obj->get_object_id()=478, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:28.977321] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:28.977533] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.977551] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:28.977561] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:28.977570] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:28.977581] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203748977580, replica_locations:[]})
[2024-09-13 13:02:28.977647] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1290407, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.013870] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.014199] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.014221] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.014228] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.014244] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.014257] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749014256, replica_locations:[]})
[2024-09-13 13:02:29.014272] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.014295] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:29.014303] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:29.014332] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:29.014390] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556891505, cache_obj->added_lc()=false, cache_obj->get_object_id()=479, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:29.015399] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.015637] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.015657] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.015663] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.015680] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.015689] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749015688, replica_locations:[]})
[2024-09-13 13:02:29.015738] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=37000, remain_us=1252316, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.053002] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.053320] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.053344] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.053351] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.053366] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.053384] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749053383, replica_locations:[]})
[2024-09-13 13:02:29.053401] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.053424] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:29.053450] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:29.053473] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:29.053536] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556930652, cache_obj->added_lc()=false, cache_obj->get_object_id()=480, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:29.054784] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.055012] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.055034] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.055041] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.055061] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.055070] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749055069, replica_locations:[]})
[2024-09-13 13:02:29.055117] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=38000, remain_us=1212937, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.055656] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:29.055682] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749055648)
[2024-09-13 13:02:29.055692] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203748955630, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:29.055711] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.055720] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.055725] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749055699)
[2024-09-13 13:02:29.075603] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=23] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:29.079398] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0)
[2024-09-13 13:02:29.092967] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=25] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:29.093396] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.093644] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=6] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:29.093630] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=20] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:29.093675] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.093695] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.093703] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.093713] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.093732] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749093731, replica_locations:[]})
[2024-09-13 13:02:29.093748] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.093772] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:29.093781] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:29.093809] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:29.093858] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6556970975, cache_obj->added_lc()=false, cache_obj->get_object_id()=481, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:29.094363] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=21] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:29.094646] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=20] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:29.094903] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=9] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:29.095134] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.095323] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=10] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:29.095385] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=23] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:29.095405] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.095429] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.095452] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.095463] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.095473] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749095472, replica_locations:[]})
[2024-09-13 13:02:29.095545] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1172510, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.096081] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:29.119017] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=21] swc wakeup.(stat_period_=1000000, ready=false)
[2024-09-13 13:02:29.129171] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=28][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:1372, tid:19944}])
[2024-09-13 13:02:29.134786] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.134967] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC7E-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:29.135134] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.135222] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=86][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.135231] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.135244] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.135271] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749135269, replica_locations:[]})
[2024-09-13 13:02:29.135286] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.135306] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:39, local_retry_times:39, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:29.135324] WDIAG [SQL] do_close_plan
(ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.135333] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.135344] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.135352] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.135356] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:29.135372] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:29.135384] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.135429] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557012545, cache_obj->added_lc()=false, cache_obj->get_object_id()=482, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 
0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.136404] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=33][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.136431] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=26][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.136529] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=39][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.136795] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.136806] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.136813] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.136822] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.136835] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749136834, replica_locations:[]}) [2024-09-13 13:02:29.136852] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.136900] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=47][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.136916] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.136928] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] failed to get 
location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:29.136946] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:29.136955] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:29.136968] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:29.136978] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:29.136985] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:29.136994] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:29.137000] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:29.137004] WDIAG [SQL.JO] 
generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:29.137011] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:29.137023] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:29.137029] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:29.137035] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:29.137039] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:29.137047] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:29.137054] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:29.137066] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:29.137075] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:29.137083] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:29.137090] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:29.137096] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:29.137104] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=40, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:29.137126] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] will sleep(sleep_us=40000, remain_us=1130928, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:29.149323] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) 
[20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21D7-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.149914] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21DB-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.150230] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21DC-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.150717] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21E0-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.151045] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21E1-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.151569] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21E5-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.151891] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21E6-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.152384] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21EA-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.152694] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21EB-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.153173] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21EF-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.155562] INFO eloop_run (eloop.c:144) 
[19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=17] PNIO [ratelimit] time: 1726203749155560, bytes: 3616049, bw: 0.124275 MB/s, add_ts: 1002533, add_bytes: 130642 [2024-09-13 13:02:29.155588] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA8-0-0] [lt=28][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749155176) [2024-09-13 13:02:29.155618] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA8-0-0] [lt=24][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203749155176}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:29.155648] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.155664] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version 
error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.155670] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749155630) [2024-09-13 13:02:29.177403] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.177644] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.177668] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.177675] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.177683] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.177698] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, 
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749177697, replica_locations:[]}) [2024-09-13 13:02:29.177721] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:29.177740] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:40, local_retry_times:40, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:29.177760] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.177766] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.177775] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.177779] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.177784] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:29.177799] WDIAG [SERVER] 
query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:29.177811] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.177857] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557054974, cache_obj->added_lc()=false, cache_obj->get_object_id()=483, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.178867] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.178907] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=39][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.179051] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] 
[lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.179271] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.179288] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.179297] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.179312] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.179329] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749179328, replica_locations:[]}) [2024-09-13 13:02:29.179348] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, 
renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.179363] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.179375] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.179396] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:29.179407] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:29.179419] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:29.179444] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4721] Failed to set partition locations(ret=-4721, 
partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:29.179471] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=26][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:29.179479] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:29.179490] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:29.179499] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:29.179506] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:29.179518] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:29.179531] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:29.179543] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:29.179549] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:29.179559] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:29.179569] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:29.179576] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:29.179593] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:29.179605] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:29.179616] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:29.179626] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:29.179636] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:29.179646] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=41, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:29.179670] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] will sleep(sleep_us=41000, remain_us=1088385, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.202720] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782E3-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.212525] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=21] PNIO [ratelimit] time: 1726203749212521, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007630, add_bytes: 0
[2024-09-13 13:02:29.213226] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=16] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}])
[2024-09-13 13:02:29.220931] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=42][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.221202] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.221227] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.221233] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.221245] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.221259] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749221257, replica_locations:[]})
[2024-09-13 13:02:29.221274] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.221294] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:41, local_retry_times:41, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:29.221316] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:29.221328] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:29.221344] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:29.221364] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:29.221369] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:29.221384] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:29.221395] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:29.221460] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=0] set logical del
time(cache_obj->get_logical_del_time()=6557098557, cache_obj->added_lc()=false, cache_obj->get_object_id()=484, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:29.222485] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:29.222512] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:29.222728] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.222939] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.222955] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.222960] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.222977] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.222986] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749222986, replica_locations:[]})
[2024-09-13 13:02:29.223000] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:29.223010] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:29.223023] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:29.223039] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:29.223051] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:29.223063] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:29.223082] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:29.223097] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:29.223112] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:29.223123] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:29.223133] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:29.223143] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:29.223156] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:29.223165] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:29.223175] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:29.223196] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:29.223203] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:29.223212] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:29.223217] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:29.223230] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:29.223242] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:29.223247] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:29.223251] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:29.223260] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:29.223264] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=3][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=42, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:29.223301] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] will sleep(sleep_us=42000, remain_us=1044754, base_sleep_us=1000,
retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.227099] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:29.227138] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=18] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952)
[2024-09-13 13:02:29.229106] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=15] gc stale ls task succ
[2024-09-13 13:02:29.233637] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=25] start do ls ha handler(ls_id_array_=[])
[2024-09-13 13:02:29.237954] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] read /sys/class/net/eth0/speed failed, errno 22
[2024-09-13 13:02:29.237977] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000
[2024-09-13 13:02:29.237985] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0")
[2024-09-13 13:02:29.237992] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR")
[2024-09-13 13:02:29.249260] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C87-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.249511] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.249540] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.249555] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.249570] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.249606] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=8][errcode=0] server is initiating(server_id=0, local_seq=42, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:29.250751] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=17] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019)
[2024-09-13 13:02:29.250780] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=26][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-09-13 13:02:29.250788] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase)
[2024-09-13 13:02:29.250806] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=18][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-09-13 13:02:29.250813] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:29.250817] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:29.250823] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-09-13 13:02:29.250829] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=5][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:29.250833] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:29.250838] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=5][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:29.250843] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:29.250850] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=6][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:29.250854] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=3][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:29.250859] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:29.250869] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:29.250891] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=20][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:29.250899] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=6][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:29.250907] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=7][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:29.250912] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-09-13 13:02:29.250920] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:29.250928] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:29.250941] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=10][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:29.250957] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=13][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:29.250962] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=5][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:29.250966] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:29.250983] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0]
[lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:29.250991] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:29.251000] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=8][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:29.251012] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=11][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:29.251023] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=10][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:29.251034] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=10][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:29.251041] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=6][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203749250591, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:29.251055] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=12][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:29.251065] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=9][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port)
[2024-09-13 13:02:29.251139] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=10][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1)
[2024-09-13 13:02:29.251155] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=15][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true)
[2024-09-13 13:02:29.251163] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=8][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:29.251170] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=6][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:29.251200] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=27][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-09-13 13:02:29.251208] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=7][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1)
[2024-09-13 13:02:29.251213] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C87-0-0] [lt=5][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST")
[2024-09-13 13:02:29.255695] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:29.255708] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA9-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749255255)
[2024-09-13 13:02:29.255721] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749255688)
[2024-09-13 13:02:29.255737] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203749055698, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:29.255746] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1})
[2024-09-13 13:02:29.255727] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AA9-0-0] [lt=18][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203749255255}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:29.255754] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-09-13 13:02:29.255776] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.255781] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.255785] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749255762)
[2024-09-13 13:02:29.255796] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.255802] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.255807] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749255794)
[2024-09-13 13:02:29.265539] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.265975] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.265997] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.266004] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.266015] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.266030] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749266029, replica_locations:[]})
[2024-09-13 13:02:29.266045] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.266064] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:42, local_retry_times:42, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:29.266079] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4006] exec result is
null(ret=-4006) [2024-09-13 13:02:29.266098] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.266116] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.266123] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.266127] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:29.266144] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:29.266154] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.266199] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557143315, cache_obj->added_lc()=false, cache_obj->get_object_id()=485, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 
0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.267173] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.267198] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.267309] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.267548] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.267565] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.267580] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.267590] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.267601] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749267600, replica_locations:[]}) [2024-09-13 13:02:29.267614] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.267624] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.267632] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.267644] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:29.267649] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:29.267660] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:29.267679] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:29.267697] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:29.267709] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:29.267720] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:29.267730] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:29.267739] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] 
[lt=8][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:29.267751] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:29.267765] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:29.267771] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:29.267777] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:29.267782] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:29.267786] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:29.267791] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:29.267805] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:29.267814] WDIAG 
[SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:29.267820] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:29.267825] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:29.267831] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:29.267835] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=3][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=43, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:29.267852] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] will sleep(sleep_us=43000, remain_us=1000202, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:29.275978] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, 
tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:29.283402] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=25] Cache replace map node details(ret=0, replace_node_count=0, replace_time=3906, replace_start_pos=377484, replace_num=62914) [2024-09-13 13:02:29.283429] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=24] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10) [2024-09-13 13:02:29.311056] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.311369] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.311392] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.311401] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.311413] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.311443] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749311442, replica_locations:[]}) [2024-09-13 13:02:29.311459] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:29.311479] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:43, local_retry_times:43, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:29.311498] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.311507] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.311517] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.311524] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.311531] WDIAG [SERVER] query 
(ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:29.311559] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:29.311570] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.311614] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557188732, cache_obj->added_lc()=false, cache_obj->get_object_id()=486, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.312770] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.312824] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=53][errcode=-4721] fail to get 
tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.312994] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.313233] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.313249] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.313258] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.313269] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.313281] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749313280, replica_locations:[]}) [2024-09-13 13:02:29.313294] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.313304] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.313325] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.313337] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:29.313346] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:29.313355] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 
13:02:29.313368] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:29.313378] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:29.313383] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:29.313391] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:29.313398] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:29.313405] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:29.313419] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:29.313428] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:29.313453] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:29.313460] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:29.313466] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:29.313474] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:29.313481] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:29.313495] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:29.313505] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:29.313513] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:29.313520] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT 
row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:29.313528] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:29.313535] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=44, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:29.313561] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18] will sleep(sleep_us=44000, remain_us=954494, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:29.336962] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=25][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.336994] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=31][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:29.337035] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:29.337048] WDIAG 
[STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:29.337071] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:29.337086] WDIAG [STORAGE.TRANS] operator() (ob_ts_mgr.h:175) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4721] refresh gts failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:29.337098] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=12] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:29.337069] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CB5-0-0] [lt=17][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203749337007}) [2024-09-13 13:02:29.348859] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:29.355795] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAA-0-0] [lt=32][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749355336) [2024-09-13 13:02:29.355823] WDIAG 
[STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAA-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203749355336}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:29.355839] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:29.355889] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749355830) [2024-09-13 13:02:29.355902] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", 
tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203749255744, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:29.355927] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.355935] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.355944] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749355914) [2024-09-13 13:02:29.357780] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.358080] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.358101] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.358111] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.358123] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.358138] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749358136, replica_locations:[]}) [2024-09-13 13:02:29.358168] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=28] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:29.358187] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:44, local_retry_times:44, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:29.358205] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.358214] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.358225] WDIAG [SERVER] 
inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.358233] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.358239] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:29.358261] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:29.358272] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.358318] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557235435, cache_obj->added_lc()=false, cache_obj->get_object_id()=487, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.359309] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4721] fail to 
nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.359333] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.359535] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.359756] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.359773] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.359781] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.359792] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.359804] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache 
has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749359803, replica_locations:[]}) [2024-09-13 13:02:29.359817] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.359827] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:29.359836] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:29.359867] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:29.359885] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:29.359897] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations 
(ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:29.359916] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:29.359931] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:29.359943] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:29.359954] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:29.359961] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:29.359965] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:29.359971] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to generate the access path for 
the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:29.359987] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:29.359994] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:29.360001] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:29.360007] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:29.360015] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:29.360023] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:29.360037] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:29.360045] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:29.360054] WDIAG [SQL] handle_text_query 
(ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:29.360061] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:29.360070] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:29.360081] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=45, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:29.360112] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] will sleep(sleep_us=45000, remain_us=907943, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:29.361599] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B49-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:29.361618] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B49-0-0] [lt=18][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203749361177], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 
13:02:29.362186] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD9-0-0] [lt=20][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203749361739, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035496, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203749360935}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:29.362227] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD9-0-0] [lt=40][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.362734] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DD9-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.405368] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.405649] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.405674] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.405683] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.405696] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.405712] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749405711, replica_locations:[]}) [2024-09-13 13:02:29.405727] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:29.405748] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:45, local_retry_times:45, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:29.405781] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=27][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.405790] WDIAG 
[SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.405801] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.405808] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:29.405814] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:29.405832] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.405888] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557282996, cache_obj->added_lc()=false, cache_obj->get_object_id()=488, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.407064] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.407280] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] 
[lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.407298] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.407307] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.407318] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.407341] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749407341, replica_locations:[]}) [2024-09-13 13:02:29.407395] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=46000, remain_us=860660, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:29.436063] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690060-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:29.452576] WDIAG [SHARE.SCHEMA] async_refresh_schema (ob_multi_version_schema_service.cpp:2414) [20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=20][errcode=-4012] already timeout(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:29.452596] WDIAG [SQL.EXE] try_refresh_schema_ (ob_remote_executor_processor.cpp:872) [20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=18][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, schema_version=1725265416329232, try_refresh_time=9994384) [2024-09-13 13:02:29.452604] WDIAG [SQL.EXE] base_before_process (ob_remote_executor_processor.cpp:109) [20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=7][errcode=-4012] fail to try refresh systenant schema(ret=-4012, ret="OB_TIMEOUT", sys_schema_version=1725265416329232, sys_local_version=1) [2024-09-13 13:02:29.452610] WDIAG [SQL.EXE] before_process (ob_remote_executor_processor.cpp:1126) [20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=5][errcode=-4012] base before process failed(ret=-4012) [2024-09-13 13:02:29.452616] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:84) [20294][T1_L0_G0][T1][YB42AC103326-00062119D7870BA5-0-0] [lt=5][errcode=-4012] before process fail(ret=-4012) [2024-09-13 13:02:29.452731] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20294][T1_L0_G0][T1][YB42AC103326-00062119DAF2902F-0-0] [lt=4][errcode=0] server is initiating(server_id=0, local_seq=43, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:29.453418] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E6-0-0] [lt=33][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.453522] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20294][T1_L0_G0][T1][YB42AC103326-00062119DAF2902F-0-0] [lt=13][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, 
schema_version=1725265416329232) [2024-09-13 13:02:29.453585] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.453834] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.453855] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.453862] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.453882] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.453893] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749453892, replica_locations:[]}) [2024-09-13 13:02:29.453909] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.453932] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:29.453941] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:29.453975] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:29.454020] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557331136, cache_obj->added_lc()=false, cache_obj->get_object_id()=489, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:29.454125] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E6-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:29.454466] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E6-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:29.454935] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E6-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:29.455070] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.455218] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E6-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:29.455673] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E6-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:29.455911] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.455939] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=27][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.455952] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749455895)
[2024-09-13 13:02:29.455942] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.455980] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.455991] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.456004] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.456019] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749456018, replica_locations:[]})
[2024-09-13 13:02:29.456081] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=811974, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.476272] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=36] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:29.483538] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=22] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9)
[2024-09-13 13:02:29.489840] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] [lt=27][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:29.503328] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.503680] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.503708] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.503720] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.503751] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=28] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.503768] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749503767, replica_locations:[]})
[2024-09-13 13:02:29.503792] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.503828] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:29.503843] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:29.503870] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:29.503933] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557381049, cache_obj->added_lc()=false, cache_obj->get_object_id()=490, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:29.505020] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.505278] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.505302] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.505313] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.505325] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.505339] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749505338, replica_locations:[]})
[2024-09-13 13:02:29.505421] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=48000, remain_us=762634, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.553665] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.553994] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.554024] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.554035] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.554048] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.554066] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749554064, replica_locations:[]})
[2024-09-13 13:02:29.554082] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.554110] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:29.554121] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:29.554150] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:29.554216] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557431332, cache_obj->added_lc()=false, cache_obj->get_object_id()=491, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:29.555256] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.555499] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.555533] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.555544] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.555557] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.555571] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749555569, replica_locations:[]})
[2024-09-13 13:02:29.555647] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=712408, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.555938] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAB-0-0] [lt=7][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749555482)
[2024-09-13 13:02:29.555945] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:29.555964] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749555937)
[2024-09-13 13:02:29.555973] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203749355912, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:29.555960] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAB-0-0] [lt=21][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203749555482}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:29.556003] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.556014] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.556025] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749555988)
[2024-09-13 13:02:29.556040] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.556049] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.556057] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749556037)
[2024-09-13 13:02:29.604946] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.605395] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.605465] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=68][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.605486] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.605508] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.605537] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749605535, replica_locations:[]})
[2024-09-13 13:02:29.605565] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.605616] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:29.605633] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:29.605692] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:29.605774] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=26][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557482886, cache_obj->added_lc()=false, cache_obj->get_object_id()=492, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:29.607397] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.607752] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.607785] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.607797] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.607812] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.607830] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749607828, replica_locations:[]})
[2024-09-13 13:02:29.607912] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=50000, remain_us=660143, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.622608] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=48] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:29.656072] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAC-0-0] [lt=28][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749655562)
[2024-09-13 13:02:29.656109] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:29.656104] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAC-0-0] [lt=31][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203749655562}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:29.656131] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749656102)
[2024-09-13 13:02:29.656140] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203749555985, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:29.656168] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.656179] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.656185] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749656150)
[2024-09-13 13:02:29.656205] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.656210] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:29.656220] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749656201)
[2024-09-13 13:02:29.658156] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.658503] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.658527] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.658538] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.658552] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.658579] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749658577, replica_locations:[]})
[2024-09-13 13:02:29.658596] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:29.658622] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:29.658633] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:29.658659] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:29.658710] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557535826, cache_obj->added_lc()=false, cache_obj->get_object_id()=493, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:29.659755] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.659977] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.659999] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.660010] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:29.660021] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:29.660034] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749660034, replica_locations:[]})
[2024-09-13 13:02:29.660089] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=51000, remain_us=607965, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203750268054)
[2024-09-13 13:02:29.676646] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=29] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:29.683660] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=29] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8)
[2024-09-13 13:02:29.711377] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:29.711830] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.711862] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:29.711885] INFO [SHARE.PT] get_ls_info_
(ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.711903] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.711921] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749711920, replica_locations:[]}) [2024-09-13 13:02:29.711956] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=33] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:29.711985] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.711996] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.712028] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.712080] WDIAG [SQL.PC] 
common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557589196, cache_obj->added_lc()=false, cache_obj->get_object_id()=494, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.713210] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.713521] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.713545] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.713556] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.713568] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.713582] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749713581, replica_locations:[]}) [2024-09-13 13:02:29.713649] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=52000, remain_us=554405, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:29.727182] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=13] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:29.727229] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=25] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:29.756227] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:29.756261] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, 
local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749756218) [2024-09-13 13:02:29.756271] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203749656146, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:29.756297] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.756304] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.756309] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749756278) [2024-09-13 13:02:29.765884] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.766355] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.766378] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.766390] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.766403] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.766420] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749766419, replica_locations:[]}) [2024-09-13 13:02:29.766447] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:29.766490] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.766501] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:29.766526] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.766552] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=21][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:29.766583] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557643693, cache_obj->added_lc()=false, cache_obj->get_object_id()=495, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.767597] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.767898] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.767921] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.767932] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.767943] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.767957] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749767956, replica_locations:[]}) [2024-09-13 13:02:29.768013] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=53000, remain_us=500042, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:29.821257] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.821743] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.821767] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.821790] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.821804] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.821821] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749821820, replica_locations:[]}) [2024-09-13 13:02:29.821856] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=33] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:29.821890] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.821905] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.821940] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not 
valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.821990] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557699107, cache_obj->added_lc()=false, cache_obj->get_object_id()=496, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.823062] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.823382] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.823404] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.823418] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.823447] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:29.823467] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749823466, replica_locations:[]}) [2024-09-13 13:02:29.823534] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=54000, remain_us=444521, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:29.837614] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:29.837678] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:29.856282] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.856249] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAD-0-0] [lt=29][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749855717) [2024-09-13 13:02:29.856309] WDIAG [STORAGE.TRANS] 
generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.856317] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749856262) [2024-09-13 13:02:29.856333] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:29.856340] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:29.856305] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAD-0-0] [lt=54][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203749855717}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, 
v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:29.856373] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749856327) [2024-09-13 13:02:29.856386] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203749756278, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:29.856410] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.856414] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.856417] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749856407) [2024-09-13 13:02:29.862142] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) 
[20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4A-0-0] [lt=17] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:29.862186] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4A-0-0] [lt=40][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203749861643], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:29.862662] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDA-0-0] [lt=11][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203749862290, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035523, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203749861373}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:29.862695] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDA-0-0] [lt=32][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.863375] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDA-0-0] [lt=6][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:29.872688] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:29.873535] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) 
[20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=17] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:29.873852] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=11] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:29.877004] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=36] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:29.877749] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.878238] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.878277] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=38][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.878310] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=32] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.878330] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) 
[2024-09-13 13:02:29.878353] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749878352, replica_locations:[]}) [2024-09-13 13:02:29.878377] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:29.878415] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.878432] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.878510] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.878575] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557755687, cache_obj->added_lc()=false, cache_obj->get_object_id()=497, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 
0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.879683] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.879988] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.880009] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.880020] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.880031] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.880056] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749880055, replica_locations:[]}) [2024-09-13 13:02:29.880125] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=55000, remain_us=387929, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:29.882346] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=23] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=5317, clean_start_pos=754974, clean_num=125829) [2024-09-13 13:02:29.883802] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=53] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7) [2024-09-13 13:02:29.935325] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.935757] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.935780] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.935806] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.935818] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.935834] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749935833, replica_locations:[]}) [2024-09-13 13:02:29.935851] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:29.935887] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.935902] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.935936] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.935992] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557813108, cache_obj->added_lc()=false, cache_obj->get_object_id()=498, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 
0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.937041] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.937391] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.937412] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.937445] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=32] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.937457] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.937471] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749937470, replica_locations:[]}) [2024-09-13 13:02:29.937525] INFO 
[SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=56000, remain_us=330530, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:29.956373] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAE-0-0] [lt=62][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749955873) [2024-09-13 13:02:29.956410] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.956431] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.956452] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749956392) [2024-09-13 13:02:29.956429] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAE-0-0] [lt=53][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, 
generate_timestamp:1726203749955873}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:29.956467] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:29.956480] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203749956461) [2024-09-13 13:02:29.956490] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203749856405, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:29.956500] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, 
tenant_id=1) [2024-09-13 13:02:29.956504] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:29.956507] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203749956497) [2024-09-13 13:02:29.993739] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.994238] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.994265] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.994286] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.994307] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.994325] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749994324, replica_locations:[]}) [2024-09-13 13:02:29.994342] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:29.994368] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:29.994378] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:29.994403] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:29.994463] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557871580, cache_obj->added_lc()=false, cache_obj->get_object_id()=499, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 
0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:29.995483] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=40][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:29.995850] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.995904] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=54][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:29.995931] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=26] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:29.995944] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:29.995959] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203749995958, replica_locations:[]}) [2024-09-13 13:02:29.996023] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=272031, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:30.006948] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=19][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:30.053266] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.053547] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.053583] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.053594] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.053607] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.053625] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750053624, replica_locations:[]}) [2024-09-13 13:02:30.053642] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.053687] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.053698] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.053730] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.053781] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557930898, cache_obj->added_lc()=false, cache_obj->get_object_id()=500, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:30.054894] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=34][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.055085] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.055110] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.055120] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.055133] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.055147] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750055146, replica_locations:[]}) [2024-09-13 13:02:30.055207] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=212848, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:30.056556] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:30.056591] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=33][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750056546) [2024-09-13 13:02:30.056607] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203749956495, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:30.056641] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.056655] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.056666] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750056623) [2024-09-13 13:02:30.071656] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:30.082711] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=19] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:30.083947] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=45] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6) [2024-09-13 13:02:30.092750] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=24] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.092890] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=17] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.093707] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=13] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.094018] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=41] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 
13:02:30.094361] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=21] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.094535] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=19] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.094866] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=15] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.094926] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=12] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.095183] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=20] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.113521] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.113949] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.113981] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.113992] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.114024] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=29] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.114042] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750114041, replica_locations:[]}) [2024-09-13 13:02:30.114060] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.114086] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.114096] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.114122] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.114182] WDIAG [SQL.PC] 
common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6557991299, cache_obj->added_lc()=false, cache_obj->get_object_id()=501, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.115296] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.115507] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.115525] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.115535] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.115547] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.115559] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750115559, replica_locations:[]}) [2024-09-13 13:02:30.115630] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1] will sleep(sleep_us=59000, remain_us=152425, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:30.119118] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=23] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:30.120034] WDIAG [SERVER] deliver_rpc_request (ob_srv_deliver.cpp:602) [19931][pnio1][T0][YB42AC103326-00062119EC0A118A-0-0] [lt=15][errcode=-5150] can't deliver request(req={packet:{hdr_:{checksum_:1218347312, pcode_:1316, hlen_:184, priority_:5, flags_:6151, tenant_id_:1001, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:1999029, timestamp:1726203750119653, dst_cluster_id:-1, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035529, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203749956042}, chid_:0, clen_:306, assemble:false, msg_count:0, payload:0}, type:0, group:0, sql_req_level:0, connection_phase:0, recv_timestamp_:1726203750120029, enqueue_timestamp_:0, request_arrival_time_:1726203750120029, trace_id_:Y0-0000000000000000-0-0}, ret=-5150) [2024-09-13 13:02:30.120100] WDIAG [SERVER] deliver (ob_srv_deliver.cpp:766) [19931][pnio1][T0][YB42AC103326-00062119EC0A118A-0-0] [lt=52][errcode=-5150] deliver rpc request 
fail(&req=0x2b07d9a0a098, ret=-5150) [2024-09-13 13:02:30.135171] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=19] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1}) [2024-09-13 13:02:30.135539] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC7F-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:30.137625] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D9A3A431-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:30.148267] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20220][T1_DupTbLease][T1][Y0-0000000000000000-0-0] [lt=7][errcode=0] server is initiating(server_id=0, local_seq=44, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:30.149771] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=44] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_status, table_name.ptr()="data_size:15, data:5F5F616C6C5F6C735F737461747573", ret=-5019) [2024-09-13 13:02:30.149809] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] 
[lt=35][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_status, ret=-5019) [2024-09-13 13:02:30.149821] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=10][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_status, db_name=oceanbase) [2024-09-13 13:02:30.149835] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=13][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_status) [2024-09-13 13:02:30.149845] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=7][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:30.149851] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=5][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:30.149859] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_status' doesn't exist [2024-09-13 13:02:30.149865] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=4][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:30.149891] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=25][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:30.149896] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) 
[20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:30.149902] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=5][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:30.149908] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=6][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:30.149920] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=10][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:30.149925] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:30.149949] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=11][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:30.149960] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=10][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:30.149972] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:30.149983] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=9][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:30.149990] 
WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=5][errcode=-5019] fail to handle text query(stmt=select * from __all_ls_status where tenant_id = 1 and flag like "%DUPLICATE%" order by ls_id limit 1, ret=-5019) [2024-09-13 13:02:30.150002] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=10][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:30.150008] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=6][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"select * from __all_ls_status where tenant_id = 1 and flag like "%DUPLICATE%" order by ls_id limit 1"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:30.150032] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=16][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:30.150057] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=19][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:30.150068] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=10][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:30.150072] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:30.150093] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) 
[20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=9][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select * from __all_ls_status where tenant_id = 1 and flag like "%DUPLICATE%" order by ls_id limit 1"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:30.150105] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.150113] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20220][T1_DupTbLease][T1][YB42AC103323-000621F922160C7D-0-0] [lt=8][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"select * from __all_ls_status where tenant_id = 1 and flag like "%DUPLICATE%" order by ls_id limit 1"}, aret=-5019, ret=-5019) [2024-09-13 13:02:30.150125] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20220][T1_DupTbLease][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select * from __all_ls_status where tenant_id = 1 and flag like "%DUPLICATE%" order by ls_id limit 1) [2024-09-13 13:02:30.150134] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20220][T1_DupTbLease][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:30.150145] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20220][T1_DupTbLease][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:30.150153] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20220][T1_DupTbLease][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203750149356, sql=select * from __all_ls_status where tenant_id = 1 and flag like "%DUPLICATE%" order by ls_id limit 1) 
[2024-09-13 13:02:30.150167] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20220][T1_DupTbLease][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:30.150180] WDIAG [SHARE] inner_get_ls_status_ (ob_ls_status_operator.cpp:949) [20220][T1_DupTbLease][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] failed to read(ret=-5019, ret="OB_TABLE_NOT_EXIST", exec_tenant_id=1, sql=select * from __all_ls_status where tenant_id = 1 and flag like "%DUPLICATE%" order by ls_id limit 1) [2024-09-13 13:02:30.150285] WDIAG [SHARE] get_duplicate_ls_status_info (ob_ls_status_operator.cpp:677) [20220][T1_DupTbLease][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] fail to inner get ls status info(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=select * from __all_ls_status where tenant_id = 1 and flag like "%DUPLICATE%" order by ls_id limit 1, tenant_id=1, exec_tenant_id=1, need_member_list=false) [2024-09-13 13:02:30.150315] WDIAG [STORAGE.DUP_TABLE] refresh_dup_ls_ (ob_dup_table_util.cpp:115) [20220][T1_DupTbLease][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-5019] get duplicate ls status info failed(ret=-5019, tmp_dup_ls_status_info={tenant_id:0, ls_id:{id:-1}, ls_group_id:18446744073709551615, status:"UNKNOWN", unit_group_id:18446744073709551615, primary_zone:"", flag:{flag:0, is_duplicate:false, is_block_tablet_in:false}}) [2024-09-13 13:02:30.150352] WDIAG [STORAGE.DUP_TABLE] execute_for_dup_ls_ (ob_dup_table_util.cpp:207) [20220][T1_DupTbLease][T1][Y0-0000000000000000-0-0] [lt=35][errcode=0] refresh dup ls failed(tmp_ret=-5019, this={tenant_id_:1, dup_table_scan_timer_:0x2b07c3a0c928, dup_loop_worker_:0x2b07c3c11520, min_dup_ls_status_info_:{tenant_id:0, ls_id:{id:-1}, ls_group_id:18446744073709551615, status:"UNKNOWN", unit_group_id:18446744073709551615, primary_zone:"", flag:{flag:0, is_duplicate:false, is_block_tablet_in:false}}, tenant_schema_dup_tablet_set_.size():0, scan_task_execute_interval_:10000000, 
last_dup_ls_refresh_time_:0, last_dup_schema_refresh_time_:0, last_scan_task_succ_time_:0, max_execute_interval_:10000000}) [2024-09-13 13:02:30.156424] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAF-0-0] [lt=42][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750156006) [2024-09-13 13:02:30.156477] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AAF-0-0] [lt=45][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203750156006}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:30.156523] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.156542] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) 
[2024-09-13 13:02:30.156552] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750156498) [2024-09-13 13:02:30.160066] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=18] PNIO [ratelimit] time: 1726203750160064, bytes: 3675905, bw: 0.056827 MB/s, add_ts: 1004504, add_bytes: 59856 [2024-09-13 13:02:30.174902] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.175190] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.175230] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=38][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.175241] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.175255] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.175272] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750175271, replica_locations:[]}) [2024-09-13 13:02:30.175291] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.175337] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.175349] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.175391] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.175462] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558052578, cache_obj->added_lc()=false, cache_obj->get_object_id()=502, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 
0x2b079609bead") [2024-09-13 13:02:30.176559] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.176760] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.176778] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.176788] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.176809] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.176823] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750176822, replica_locations:[]}) [2024-09-13 13:02:30.176890] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=0] will 
sleep(sleep_us=60000, remain_us=91164, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:30.183008] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:565) [20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=42] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0) [2024-09-13 13:02:30.204799] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782E4-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.206781] INFO [MDS] for_each_ls_in_tenant (mds_tenant_service.cpp:237) [20107][T1_Occam][T1][YB42AC103323-000621F921A60C84-0-0] [lt=15] for each ls(succ_num=0, ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.214365] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=27] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) 
[2024-09-13 13:02:30.220142] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=26] PNIO [ratelimit] time: 1726203750220138, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007617, add_bytes: 0 [2024-09-13 13:02:30.227279] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=9] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:30.227332] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=27] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:30.229172] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=15] gc stale ls task succ [2024-09-13 13:02:30.229593] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:346) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=16] ====== check clog disk timer task ====== [2024-09-13 13:02:30.229625] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=27] get_disk_usage(ret=0, capacity(MB):=0, used(MB):=0) [2024-09-13 13:02:30.229645] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=10] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false) [2024-09-13 13:02:30.230627] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=25][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:481, tid:19944}]) [2024-09-13 13:02:30.233708] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) 
[20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=25] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:30.237189] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.237492] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.237514] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.237521] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.237530] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.237544] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750237543, replica_locations:[]}) [2024-09-13 13:02:30.237560] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.237581] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:60, local_retry_times:60, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:30.237599] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.237607] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.237616] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:30.237623] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:30.237660] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=36][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:30.237677] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = 
'__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:30.237691] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.237739] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558114855, cache_obj->added_lc()=false, cache_obj->get_object_id()=503, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.238130] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:30.238145] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:30.238151] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:30.238161] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:30.238856] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4721] fail to nonblock get log stream 
location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.238908] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=51][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:30.239016] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.239492] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.239509] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.239514] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.239523] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.239567] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=39] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750239566, replica_locations:[]}) [2024-09-13 13:02:30.239581] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.239592] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:30.239599] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.239611] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:30.239616] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:30.239625] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:30.239638] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:30.239651] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:30.239670] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=18][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:30.239682] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:30.239687] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:30.239692] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:30.239699] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, 
get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:30.239708] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:30.239713] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:30.239718] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:30.239722] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:30.239728] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:30.239734] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:30.239745] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:30.239754] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:30.239768] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=12][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:30.239773] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:30.239782] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:30.239787] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=61, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:30.239808] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11] will sleep(sleep_us=28247, remain_us=28247, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203750268054) [2024-09-13 13:02:30.241416] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.241832] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.242683] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] 
[lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.243013] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.243275] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.251609] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C88-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.251944] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.251987] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.251996] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.252065] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=64] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.252150] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=16][errcode=0] server 
is initiating(server_id=0, local_seq=45, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:30.253601] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=22] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:30.253635] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=30][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:30.253646] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=9][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:30.253656] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:30.253666] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=6][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:30.253679] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=12][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:30.253688] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=5][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 
13:02:30.253694] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=5][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:30.253700] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=5][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:30.253705] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:30.253733] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=26][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:30.253740] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=6][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:30.253745] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:30.253751] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=5][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:30.253775] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=13][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:30.253786] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=10][errcode=-5019] Failed to generate stmt(ret=-5019, 
result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:30.253796] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=6][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:30.253808] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=10][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:30.253813] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=5][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:30.253827] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=11][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:30.253835] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:30.253859] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=17][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:30.253914] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=48][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:30.253919] WDIAG 
[SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=5][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:30.253924] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=5][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:30.253941] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:30.253954] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.253960] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=6][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:30.253969] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=8][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:30.253977] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=6][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:30.253988] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=10][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:30.253996] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=6][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203750253265, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:30.254010] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=13][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:30.254030] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=16][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:30.254136] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=8][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:30.254151] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=14][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:30.254158] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=6][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:30.254163] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=4][errcode=-5019] ls table iterator next failed(ret=-5019, 
ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:30.254173] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=6][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:30.254186] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=12][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:30.254191] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C88-0-0] [lt=5][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:30.256523] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:30.256553] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750256513) [2024-09-13 13:02:30.256568] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203750056623, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, 
cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:30.256583] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB0-0-0] [lt=39][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750256079) [2024-09-13 13:02:30.256597] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.256605] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.256610] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750256580) [2024-09-13 13:02:30.256603] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB0-0-0] [lt=18][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203750256079}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, 
server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:30.256620] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:30.256627] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:30.256642] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.256648] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.256653] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750256639) [2024-09-13 13:02:30.257863] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=8] table not 
exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, table_name.ptr()="data_size:27, data:5F5F616C6C5F7669727475616C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:30.257905] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=40][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, ret=-5019) [2024-09-13 13:02:30.257914] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_virtual_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:30.257921] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=7][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_virtual_ls_meta_table) [2024-09-13 13:02:30.257928] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:30.257945] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=16][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:30.257954] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=6][errcode=-5019] Table 'oceanbase.__all_virtual_ls_meta_table' doesn't exist [2024-09-13 13:02:30.257968] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=12][errcode=-5019] 
resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:30.257976] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=7][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:30.257990] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=12][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:30.257996] WDIAG [SQL.RESV] resolve_joined_table_item (ob_dml_resolver.cpp:3379) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=4][errcode=-5019] resolve table failed(ret=-5019) [2024-09-13 13:02:30.258007] WDIAG [SQL.RESV] resolve_joined_table (ob_dml_resolver.cpp:2934) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=9][errcode=-5019] resolve joined table item failed(ret=-5019) [2024-09-13 13:02:30.258014] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2788) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=5][errcode=-5019] resolve joined table failed(ret=-5019) [2024-09-13 13:02:30.258023] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=8][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:30.258029] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=5][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:30.258038] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=10][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:30.258042] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) 
[20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:30.258065] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=15][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:30.258079] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=12][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:30.258095] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=14][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:30.258101] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:30.258112] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=10][errcode=-5019] fail to handle text query(stmt=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;, ret=-5019) [2024-09-13 13:02:30.258118] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=5][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:30.258129] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=10][errcode=-5019] 
execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:30.258147] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=14][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:30.258160] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=10][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:30.258165] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=4][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:30.258168] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:30.258183] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join 
oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:30.258197] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20295][BlackListServic][T1][YB42AC103323-000621F921260C81-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.258203] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20295][BlackListServic][T0][YB42AC103323-000621F921260C81-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, aret=-5019, ret=-5019) [2024-09-13 13:02:30.258210] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:30.258221] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:30.258226] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) 
[20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:30.258231] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203750257563, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:30.258251] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:111) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:30.258258] WDIAG [STORAGE.TRANS] do_thread_task_ (ob_black_list.cpp:222) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:30.258266] INFO [STORAGE.TRANS] print_stat_ (ob_black_list.cpp:398) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=6] start to print blacklist info [2024-09-13 13:02:30.258332] INFO [STORAGE.TRANS] run1 (ob_black_list.cpp:194) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=5] ls blacklist refresh finish(cost_time=1644) [2024-09-13 13:02:30.268172] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203750268055, ctx_timeout_ts=1726203750268055, worker_timeout_ts=1726203750268054, default_timeout=1000000) [2024-09-13 13:02:30.268217] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=45][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:30.268226] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:30.268238] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.268251] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) [2024-09-13 13:02:30.268266] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:61, local_retry_times:61, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:30.268289] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.268298] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.268311] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:30.268319] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:30.268323] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:30.268344] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:30.268359] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.268410] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558145525, cache_obj->added_lc()=false, cache_obj->get_object_id()=504, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 
0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.269549] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.269589] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=38][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:30.269603] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203750268054, ctx_timeout_ts=1726203750268054, worker_timeout_ts=1726203750268054, default_timeout=1000000) [2024-09-13 13:02:30.269621] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=17][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:30.269629] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:30.269644] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, 
tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:30.269653] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.269682] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=28][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:30.269689] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4012, tmp_ret="OB_TIMEOUT", tablet_id={id:1}) [2024-09-13 13:02:30.269697] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:30.269712] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:30.269720] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:30.269726] WDIAG [SQL.JO] 
compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:30.269735] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:30.269738] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:30.269743] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:30.269749] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:30.269758] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:30.269789] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=30][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:30.269793] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:30.269798] WDIAG 
[SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:30.269802] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:30.269809] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:30.269820] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:30.269830] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:30.269836] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:30.269841] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:30.269847] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:30.269854] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, 
column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=62, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:30.269866] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=10][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:30.269888] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.269897] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.269924] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=6] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:30.269940] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:30.269944] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:30.269948] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:30.269959] WDIAG [SERVER] query 
(ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:30.269967] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.269977] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=8] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2001178) [2024-09-13 13:02:30.269985] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:30.269992] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:30.270013] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=20][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:30.270020] WDIAG [SERVER] 
execute_read (ob_inner_sql_connection.cpp:1726) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:30.270028] WDIAG [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=7][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1) [2024-09-13 13:02:30.270040] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:30.270079] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C81-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558147198, cache_obj->added_lc()=false, cache_obj->get_object_id()=505, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.270148] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:30.270158] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") 
[2024-09-13 13:02:30.270163] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:30.270168] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=4][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:30.270178] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4601) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:30.270193] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=14][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1) [2024-09-13 13:02:30.270198] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=4] [REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, cost=2002146) [2024-09-13 13:02:30.270204] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=5][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1) [2024-09-13 13:02:30.270211] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=5] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2002171) 
[2024-09-13 13:02:30.270220] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=9][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1]) [2024-09-13 13:02:30.270225] INFO [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=5] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:30.270232] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C81-0-0] [lt=6][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:30.270237] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] fail to batch process task(ret=-4012) [2024-09-13 13:02:30.270245] WDIAG [SERVER] run1 (ob_uniq_task_queue.h:456) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1) [2024-09-13 13:02:30.270270] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=7] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1]) [2024-09-13 13:02:30.270279] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=7] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1) [2024-09-13 13:02:30.271866] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, 
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.271930] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=62][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:30.272082] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.272575] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.272596] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.272602] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.272610] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.272625] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750272624, replica_locations:[]}) [2024-09-13 13:02:30.272639] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.272649] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:30.272656] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.272676] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:30.272682] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:30.272688] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] Get partition error, the location cache will 
be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:30.272701] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:30.272708] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:30.272713] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:30.272718] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:30.272722] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:30.272726] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:30.272732] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE 
table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:30.272738] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:30.272747] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:30.272751] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:30.272755] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:30.272759] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:30.272764] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:30.272774] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:30.272781] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:30.272786] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 
13:02:30.272791] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:30.272797] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:30.272802] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:30.272821] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] already timeout, do not need sleep(sleep_us=0, remain_us=1997468, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.272935] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.273262] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.273275] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.273280] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.273286] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.273296] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750273295, replica_locations:[]}) [2024-09-13 13:02:30.273305] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.273320] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:30.273333] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4006] exec 
result is null(ret=-4006) [2024-09-13 13:02:30.273337] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.273346] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:30.273360] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:30.273364] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:30.273374] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:30.273383] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.273413] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558150532, cache_obj->added_lc()=false, cache_obj->get_object_id()=506, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 
0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.273947] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.274155] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.274191] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=35][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.274198] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.274209] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.274218] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.274228] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:30.274237] WDIAG [SHARE] renew_master_rootserver 
(ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:30.274245] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638) [2024-09-13 13:02:30.274322] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.274422] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.274453] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=30][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:30.274488] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.274499] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.274522] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.274504] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.274547] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=41] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.274563] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750274562, replica_locations:[]}) [2024-09-13 13:02:30.274580] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=16][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.274590] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:30.274599] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, 
expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.274610] WDIAG [SERVER] refresh_sys_tenant_ls (ob_service.cpp:2354) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4721] fail to refresh sys tenant log stream(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, SYS_LS={id:1}) [2024-09-13 13:02:30.274619] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] fail to refresh core partition(tmp_ret=-4721) [2024-09-13 13:02:30.274797] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000) [2024-09-13 13:02:30.274808] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] [2024-09-13 13:02:30.274901] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.275044] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.275076] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.275082] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.275100] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.275099] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.275109] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.275113] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750275112, replica_locations:[]}) [2024-09-13 13:02:30.275117] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.275123] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] get empty location from meta 
table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.275124] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.275133] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:30.275132] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.275140] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:30.275142] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:30.275150] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:30.275151] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) 
[19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:30.275155] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:30.275158] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0) [2024-09-13 13:02:30.275160] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:30.275170] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:30.275179] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:30.275193] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:30.275199] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=4][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:30.275227] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=27][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:30.275231] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:30.275247] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.275268] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=36][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:30.275277] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:30.275282] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:30.275289] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:30.275293] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] 
failed to perform optimization(ret=-4721) [2024-09-13 13:02:30.275299] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:30.275307] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:30.275321] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:30.275335] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:30.275343] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:30.275347] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:30.275355] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:30.275379] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, 
retry_cnt=1, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:30.275400] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] will sleep(sleep_us=1000, remain_us=1994889, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.275409] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.275416] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.275430] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.275455] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.275459] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.275472] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", 
cluster_id=1726203323) [2024-09-13 13:02:30.275477] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:30.275481] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1) [2024-09-13 13:02:30.275542] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.275690] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.275700] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.275704] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.275711] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.275715] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", 
leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.275726] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:30.275730] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:30.275735] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2) [2024-09-13 13:02:30.275740] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638) [2024-09-13 13:02:30.275748] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:30.275752] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2) [2024-09-13 13:02:30.276546] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.276748] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 
13:02:30.276762] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.276767] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.276774] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.276782] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750276782, replica_locations:[]})
[2024-09-13 13:02:30.276804] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:30.276819] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:1, local_retry_times:1, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:30.276832] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:30.276838] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.276845] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:30.276852] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:30.276855] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:30.276871] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:30.276890] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.276926] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558154043, cache_obj->added_lc()=false, cache_obj->get_object_id()=507, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.277897] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:30.277932] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=34][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:30.278055] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.278263] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.278277] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.278282] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.278292] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.278303] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750278303, replica_locations:[]})
[2024-09-13 13:02:30.278316] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:30.278325] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:30.278334] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:30.278355] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:30.278360] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:30.278368] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:30.278377] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:30.278387] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:30.278392] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:30.278399] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:30.278403] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:30.278410] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:30.278415] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:30.278429] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:30.278433] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:30.278446] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:30.278450] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:30.278454] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:30.278458] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:30.278465] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:30.278471] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:30.278479] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:30.278482] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:30.278487] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:30.278491] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=2, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:30.278511] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] will sleep(sleep_us=2000, remain_us=1991778, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:30.280725] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.280978] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.280999] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.281005] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.281012] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.281021] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750281021, replica_locations:[]})
[2024-09-13 13:02:30.281034] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:30.281049] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:2, local_retry_times:2, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:30.281064] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:30.281069] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.281076] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:30.281092] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:30.281097] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:30.281110] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:30.281119] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.281150] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558158269, cache_obj->added_lc()=false, cache_obj->get_object_id()=508, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.281860] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:30.281897] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=36][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:30.281990] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.282211] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.282236] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.282248] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.282276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.282292] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750282291, replica_locations:[]})
[2024-09-13 13:02:30.282311] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:30.282325] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:30.282337] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:30.282353] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:30.282365] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:30.282376] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:30.282390] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:30.282414] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:30.282471] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=57][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:30.282479] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:30.282486] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:30.282490] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:30.282499] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:30.282505] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:30.282508] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:30.282512] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:30.282519] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:30.282523] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:30.282530] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:30.282538] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:30.282554] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:30.282561] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:30.282565] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:30.282573] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:30.282581] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=3, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:30.282600] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] will sleep(sleep_us=3000, remain_us=1987688, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:30.283072] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=35] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:30.284075] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=44] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5)
[2024-09-13 13:02:30.285809] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.286040] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.286062] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.286075] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.286089] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.286105] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750286104, replica_locations:[]})
[2024-09-13 13:02:30.286127] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:30.286143] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:3, local_retry_times:3, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:30.286180] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=35][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:30.286189] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.286200] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:30.286207] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:30.286211] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:30.286231] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:30.286240] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.286269] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558163389, cache_obj->added_lc()=false, cache_obj->get_object_id()=509, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.286943] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:30.286969] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:30.287065] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.287243] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.287256] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.287262] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.287271] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.287281] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750287280, replica_locations:[]})
[2024-09-13 13:02:30.287293] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:30.287302] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:30.287316] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:30.287327] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:30.287355] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=4000, remain_us=1982934, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:30.291551] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.291763] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.291781] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.291787] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.291798] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.291810] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750291809, replica_locations:[]})
[2024-09-13 13:02:30.291823] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:30.291841] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:30.291855] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.291883] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.291921] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558169039, cache_obj->added_lc()=false, cache_obj->get_object_id()=510, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.292801] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.293091] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:30.293106] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.293112] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.293119] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.293128] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750293127, replica_locations:[]}) [2024-09-13 13:02:30.293169] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1977120, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.298363] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=41][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.298645] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.298664] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.298669] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.298679] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.298688] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750298688, replica_locations:[]}) [2024-09-13 13:02:30.298701] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.298718] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) 
[2024-09-13 13:02:30.298727] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.298749] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.298786] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558175904, cache_obj->added_lc()=false, cache_obj->get_object_id()=511, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.299670] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.299870] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.299920] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=50][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.299926] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.299936] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.299947] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750299946, replica_locations:[]}) [2024-09-13 13:02:30.300003] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=6000, remain_us=1970285, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.306208] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.306459] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.306477] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 
13:02:30.306483] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.306489] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.306502] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750306502, replica_locations:[]}) [2024-09-13 13:02:30.306526] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.306546] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.306555] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.306576] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") 
[2024-09-13 13:02:30.306620] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558183738, cache_obj->added_lc()=false, cache_obj->get_object_id()=512, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.307545] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.307914] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.307934] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.307940] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.307951] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.307959] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750307958, replica_locations:[]}) [2024-09-13 13:02:30.308012] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1962277, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.315194] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.315506] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.315523] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.315529] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.315539] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] server_list is empty, 
do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.315550] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750315550, replica_locations:[]}) [2024-09-13 13:02:30.315563] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.315582] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.315591] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.315622] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.315664] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558192782, cache_obj->added_lc()=false, cache_obj->get_object_id()=513, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 
0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.316582] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.316893] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.316914] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.316920] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.316927] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.316936] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750316935, replica_locations:[]}) [2024-09-13 13:02:30.316977] INFO [SERVER] 
sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1953311, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.325210] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.325515] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.325536] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.325542] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.325566] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.325579] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750325579, replica_locations:[]}) [2024-09-13 13:02:30.325593] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.325613] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.325622] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.325644] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.325686] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558202804, cache_obj->added_lc()=false, cache_obj->get_object_id()=514, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.326717] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.327000] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.327022] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.327028] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.327047] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.327056] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750327055, replica_locations:[]}) [2024-09-13 13:02:30.327105] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1943184, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.336318] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4719] get ls 
handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.336631] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.336652] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.336659] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.336670] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.336683] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750336682, replica_locations:[]}) [2024-09-13 13:02:30.336698] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) 
[2024-09-13 13:02:30.336721] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.336741] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.336770] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.336814] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558213930, cache_obj->added_lc()=false, cache_obj->get_object_id()=515, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.337770] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.338010] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.338030] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.338037] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.338044] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.338052] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750338051, replica_locations:[]}) [2024-09-13 13:02:30.338103] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=10000, remain_us=1932186, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.338121] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:30.338138] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=16] refresh gts(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1, need_refresh=false, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:30.338150] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) 
[20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:30.338175] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CBB-0-0] [lt=40][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203750338097}) [2024-09-13 13:02:30.348359] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.348629] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.348668] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=38][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.348694] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.348705] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.348721] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750348720, replica_locations:[]}) [2024-09-13 13:02:30.348736] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.348758] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.348767] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.348790] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.348836] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558225953, cache_obj->added_lc()=false, cache_obj->get_object_id()=516, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:30.348971] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=36] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:30.349828] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.350171] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.350211] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.350218] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.350228] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.350240] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750350239, 
replica_locations:[]}) [2024-09-13 13:02:30.350294] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1919995, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.351095] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=17] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:30.351926] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=16][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:30.356712] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:30.356761] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750356698) [2024-09-13 13:02:30.356771] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", 
tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203750256579, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:30.356796] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.356806] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.356811] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750356781) [2024-09-13 13:02:30.361556] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.361900] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.361924] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.361931] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.361939] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.361951] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750361950, replica_locations:[]}) [2024-09-13 13:02:30.362005] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=51] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.362030] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.362046] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.362079] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.362150] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] set 
logical del time(cache_obj->get_logical_del_time()=6558239267, cache_obj->added_lc()=false, cache_obj->get_object_id()=517, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.362545] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4B-0-0] [lt=20] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:30.362563] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4B-0-0] [lt=17][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203750362130], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:30.363035] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDB-0-0] [lt=25][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:30.363380] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.363600] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.363618] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.363626] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.363650] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.363661] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750363661, replica_locations:[]}) [2024-09-13 13:02:30.363691] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDB-0-0] [lt=15][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203750363405, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035547, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203750362954}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:30.363722] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=12000, 
remain_us=1906567, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.363726] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDB-0-0] [lt=35][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:30.369420] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:30.376054] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.376263] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.376295] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.376305] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.376319] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.376337] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750376336, replica_locations:[]}) [2024-09-13 13:02:30.376361] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.376391] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.376418] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.376460] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.376520] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558253632, cache_obj->added_lc()=false, cache_obj->get_object_id()=518, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 
0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.377658] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=37][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.377937] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.377957] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.377963] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.377973] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.377985] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750377984, replica_locations:[]}) [2024-09-13 13:02:30.378037] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=13000, remain_us=1892252, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.389314] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14] ====== tenant freeze timer task ====== [2024-09-13 13:02:30.389390] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=42][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:30.391270] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.391599] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.391643] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=42][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.391655] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.391671] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.391690] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750391688, replica_locations:[]}) [2024-09-13 13:02:30.391710] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.391741] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.391754] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.391793] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.391851] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558268964, cache_obj->added_lc()=false, cache_obj->get_object_id()=519, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 
0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.393044] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.393319] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.393351] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.393357] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.393365] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.393375] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203750393374, replica_locations:[]}) [2024-09-13 13:02:30.393426] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1876863, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.407755] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.408037] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=43][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.408062] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.408069] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.408077] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.408092] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750408091, replica_locations:[]}) [2024-09-13 13:02:30.408125] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=31] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.408144] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:30.408163] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.408172] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.408195] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.408241] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558285358, cache_obj->added_lc()=false, cache_obj->get_object_id()=520, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") 
[2024-09-13 13:02:30.409277] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.409518] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.409543] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.409553] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.409565] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.409583] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750409582, replica_locations:[]}) [2024-09-13 13:02:30.409640] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=15000, 
remain_us=1860649, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.424968] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.425205] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.425233] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.425241] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.425253] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.425266] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750425265, replica_locations:[]}) [2024-09-13 13:02:30.425281] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.425307] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.425316] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.425368] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.425416] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558302534, cache_obj->added_lc()=false, cache_obj->get_object_id()=521, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.426454] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.426665] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.426688] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.426699] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.426711] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.426725] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750426724, replica_locations:[]}) [2024-09-13 13:02:30.426793] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1843496, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.438237] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690061-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.443020] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.443266] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.443293] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.443331] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=36] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.443344] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.443362] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750443360, replica_locations:[]}) [2024-09-13 13:02:30.443379] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.443404] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.443415] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.443469] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.443527] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558320644, cache_obj->added_lc()=false, cache_obj->get_object_id()=522, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.444624] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.444932] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.444980] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=46][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.445011] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=30] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.445030] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.445050] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750445049, replica_locations:[]}) [2024-09-13 13:02:30.445119] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1825170, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.456718] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB1-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750456224) [2024-09-13 13:02:30.456747] WDIAG [STORAGE.TRANS] 
generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.456760] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.456768] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750456728) [2024-09-13 13:02:30.456781] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.456751] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB1-0-0] [lt=31][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203750456224}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:30.456786] WDIAG [STORAGE.TRANS] 
generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.456790] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750456778) [2024-09-13 13:02:30.462394] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.462629] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.462669] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.462704] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=34] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.462723] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.462747] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750462746, replica_locations:[]}) [2024-09-13 13:02:30.462769] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.462796] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.462808] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.462848] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.462906] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558340022, cache_obj->added_lc()=false, cache_obj->get_object_id()=523, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:30.464042] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.464260] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.464280] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.464302] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.464314] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.464329] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750464328, replica_locations:[]}) [2024-09-13 13:02:30.464394] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=18000, 
remain_us=1805895, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.468424] INFO [LIB] log_compress_loop_ (ob_log_compressor.cpp:393) [19885][SyslogCompress][T0][Y0-0000000000000000-0-0] [lt=21] log compressor cycles once. (ret=0, cost_time=0, compressed_file_count=0, deleted_file_count=0, disk_remaining_size=182289358848) [2024-09-13 13:02:30.469777] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.470136] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.470800] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.471118] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.471365] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.482632] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.483106] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:30.483135] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.483147] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.483161] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.483177] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750483176, replica_locations:[]}) [2024-09-13 13:02:30.483195] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.483241] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.483251] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.483285] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.483347] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558360461, cache_obj->added_lc()=false, cache_obj->get_object_id()=524, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.483405] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=23] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:30.484210] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=45] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4) [2024-09-13 13:02:30.484515] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.484866] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) 
[2024-09-13 13:02:30.484911] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=44][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.484924] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.484937] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.484952] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750484950, replica_locations:[]}) [2024-09-13 13:02:30.485017] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=19000, remain_us=1785271, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.504301] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.504551] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] 
fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.504590] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.504602] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.504615] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.504633] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750504632, replica_locations:[]}) [2024-09-13 13:02:30.504650] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.504678] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 
13:02:30.504697] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.504721] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.504770] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558381886, cache_obj->added_lc()=false, cache_obj->get_object_id()=525, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.505896] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.506140] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.506168] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.506181] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.506197] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.506214] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750506213, replica_locations:[]})
[2024-09-13 13:02:30.506280] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1764008, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:30.526539] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.526786] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.526811] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.526823] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.526844] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.526867] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750526866, replica_locations:[]})
[2024-09-13 13:02:30.526896] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=27] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:30.526923] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:30.526933] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.526961] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.527008] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558404126, cache_obj->added_lc()=false, cache_obj->get_object_id()=526, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.528110] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.528334] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.528355] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.528366] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.528379] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.528402] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750528402, replica_locations:[]})
[2024-09-13 13:02:30.528494] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1741795, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:30.549735] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.550031] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.550058] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.550070] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.550083] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.550100] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750550099, replica_locations:[]})
[2024-09-13 13:02:30.550117] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:30.550144] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:30.550184] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=38][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.550210] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.550293] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558427406, cache_obj->added_lc()=false, cache_obj->get_object_id()=527, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.551459] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.551666] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.551687] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.551697] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.551709] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.551737] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750551736, replica_locations:[]})
[2024-09-13 13:02:30.551790] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1718498, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:30.556858] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:30.556891] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750556850)
[2024-09-13 13:02:30.556901] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203750356780, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:30.556923] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:30.556932] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:30.556937] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750556908)
[2024-09-13 13:02:30.574061] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.574322] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.574363] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.574375] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.574389] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.574406] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750574405, replica_locations:[]})
[2024-09-13 13:02:30.574423] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:30.574457] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:30.574468] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.574499] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.574549] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558451665, cache_obj->added_lc()=false, cache_obj->get_object_id()=528, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.575583] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.575861] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.575905] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=43][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.575925] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.575937] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.575950] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750575949, replica_locations:[]})
[2024-09-13 13:02:30.576002] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=23000, remain_us=1694287, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:30.599255] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.599534] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.599558] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.599569] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.599581] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.599598] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750599597, replica_locations:[]})
[2024-09-13 13:02:30.599621] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:30.599647] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:30.599657] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.599690] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.599739] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558476855, cache_obj->added_lc()=false, cache_obj->get_object_id()=529, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.600793] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.601070] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.601090] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.601101] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.601112] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.601124] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750601124, replica_locations:[]})
[2024-09-13 13:02:30.601198] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=24000, remain_us=1669090, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:30.623344] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=45] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:30.625473] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.625823] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.625864] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.625884] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.625903] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.625924] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750625923, replica_locations:[]})
[2024-09-13 13:02:30.625943] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:30.625977] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:30.625988] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.626021] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.626070] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558503187, cache_obj->added_lc()=false, cache_obj->get_object_id()=530, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.627159] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.627418] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.627465] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=46][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.627477] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.627488] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.627501] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750627500, replica_locations:[]})
[2024-09-13 13:02:30.627555] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=25000, remain_us=1642734, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:30.652815] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.653155] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.653189] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.653219] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=28] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.653237] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.653256] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750653255, replica_locations:[]})
[2024-09-13 13:02:30.653278] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:30.653304] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:30.653314] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:30.653337] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:30.653383] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558530500, cache_obj->added_lc()=false, cache_obj->get_object_id()=531, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:30.654448] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:30.654631] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.654653] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:30.654663] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:30.654675] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:30.654704] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750654703, replica_locations:[]})
[2024-09-13 13:02:30.654774] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=26000, remain_us=1615515, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:30.656792] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474)
[20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB2-0-0] [lt=39][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750656353) [2024-09-13 13:02:30.656822] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB2-0-0] [lt=24][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203750656353}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:30.656915] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.656931] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.656938] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, 
ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750656899) [2024-09-13 13:02:30.681227] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.681521] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.681556] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.681573] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.681611] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=35] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.681635] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750681634, replica_locations:[]}) [2024-09-13 13:02:30.681671] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=33] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.681707] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.681722] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.681763] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.681827] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558558940, cache_obj->added_lc()=false, cache_obj->get_object_id()=532, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.683105] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.683336] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.683361] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.683372] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.683383] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.683396] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750683396, replica_locations:[]}) [2024-09-13 13:02:30.683491] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1586798, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.683782] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=45] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) 
[2024-09-13 13:02:30.684342] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=46] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3) [2024-09-13 13:02:30.710806] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.711125] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.711160] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.711172] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.711192] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.711210] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203750711209, replica_locations:[]}) [2024-09-13 13:02:30.711241] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=28] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.711275] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.711304] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=27][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.711334] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.711394] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558588508, cache_obj->added_lc()=false, cache_obj->get_object_id()=533, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.712633] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.712831] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.712861] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.712871] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.712900] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=28] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.712913] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750712912, replica_locations:[]}) [2024-09-13 13:02:30.713001] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1557288, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.727357] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=26] Construct Queue 
Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:30.727407] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=28] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:30.741407] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.741720] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.741750] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.741757] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.741766] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.741779] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750741778, replica_locations:[]}) [2024-09-13 13:02:30.741792] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.741817] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.741830] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.741851] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.741913] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558619029, cache_obj->added_lc()=false, cache_obj->get_object_id()=534, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.742984] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.743189] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.743219] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.743226] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.743233] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.743243] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750743242, replica_locations:[]}) [2024-09-13 13:02:30.743316] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1526972, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 
13:02:30.756966] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:30.756995] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=27][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750756959) [2024-09-13 13:02:30.757005] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203750556908, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:30.757026] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.757034] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.757040] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, 
ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750757012) [2024-09-13 13:02:30.758753] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:30.772531] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.772831] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.772857] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.772864] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.772885] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.772897] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750772896, replica_locations:[]}) [2024-09-13 13:02:30.772923] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.772945] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.772952] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.772993] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.773047] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558650163, cache_obj->added_lc()=false, cache_obj->get_object_id()=535, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.774098] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.774313] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.774337] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.774343] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.774351] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.774368] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750774367, replica_locations:[]}) [2024-09-13 13:02:30.774447] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1495842, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.805229] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.805467] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.805524] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=56][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.805538] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.805556] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.805583] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750805581, replica_locations:[]}) [2024-09-13 13:02:30.805614] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, 
ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.805674] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.805689] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.805728] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.805886] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=64][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558682982, cache_obj->added_lc()=false, cache_obj->get_object_id()=536, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.807625] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.807946] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.807974] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.808005] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=30] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.808018] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.808047] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750808046, replica_locations:[]}) [2024-09-13 13:02:30.808133] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=31000, remain_us=1462156, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.838651] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:30.838717] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) 
[2024-09-13 13:02:30.839485] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.839914] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.839959] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=43][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.839972] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.840003] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=28] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.840028] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750840026, replica_locations:[]}) [2024-09-13 13:02:30.840050] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18] [TABLET_LOCATION] 
batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.840105] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.840119] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.840211] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.840318] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558717421, cache_obj->added_lc()=false, cache_obj->get_object_id()=537, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.842213] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.842491] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.842519] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.842530] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.842553] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.842572] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750842571, replica_locations:[]}) [2024-09-13 13:02:30.842663] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1427626, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.857017] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB3-0-0] [lt=22][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750856491) 
[2024-09-13 13:02:30.857026] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:30.857049] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:30.857042] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB3-0-0] [lt=24][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203750856491}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:30.857081] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, 
generate_timestamp=1726203750857018) [2024-09-13 13:02:30.857095] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203750757011, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:30.857119] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.857130] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.857136] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750857105) [2024-09-13 13:02:30.857157] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.857166] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.857171] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] 
generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750857155) [2024-09-13 13:02:30.863114] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4C-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:30.863139] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4C-0-0] [lt=24][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203750862590], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:30.863593] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDC-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:30.864354] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDC-0-0] [lt=26][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203750864011, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035562, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203750863504}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:30.864384] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDC-0-0] [lt=29][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:30.872936] INFO [RPC.FRAME] rpc_easy_timer_cb 
(ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=21] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.872969] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=13] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.873584] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=11] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:30.874970] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.875259] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.875285] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.875336] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.875352] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.875371] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750875369, replica_locations:[]}) [2024-09-13 13:02:30.875386] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.875429] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=2][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.875446] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.875482] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.875567] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558752672, cache_obj->added_lc()=false, cache_obj->get_object_id()=538, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 
0x2b079609bead") [2024-09-13 13:02:30.877372] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.877679] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.877700] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.877708] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.877723] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.877734] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750877733, replica_locations:[]}) [2024-09-13 13:02:30.877811] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will 
sleep(sleep_us=33000, remain_us=1392478, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.884140] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=52] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:30.884473] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=47] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2) [2024-09-13 13:02:30.911277] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.911475] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.911515] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.911524] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.911539] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] server_list is 
empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.911563] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750911562, replica_locations:[]}) [2024-09-13 13:02:30.911605] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=40] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.911648] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.911662] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.911707] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.911813] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558788915, cache_obj->added_lc()=false, cache_obj->get_object_id()=539, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 
0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.913635] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.913904] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.913932] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.913943] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.913956] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.913974] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750913973, replica_locations:[]}) [2024-09-13 13:02:30.914066] INFO [SERVER] 
sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=34000, remain_us=1356223, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.948540] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.948906] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.948944] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.948952] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.948968] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.948991] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750948990, replica_locations:[]}) [2024-09-13 13:02:30.949013] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.949055] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=2][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.949068] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.949112] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.949225] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558826328, cache_obj->added_lc()=false, cache_obj->get_object_id()=540, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.951106] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.951335] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.951360] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.951369] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.951381] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.951393] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750951392, replica_locations:[]}) [2024-09-13 13:02:30.951496] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=35000, remain_us=1318793, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:30.957031] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) 
[20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB4-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203750956570) [2024-09-13 13:02:30.957061] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB4-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203750956570}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:30.957091] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.957108] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:30.957117] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, 
ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203750957074) [2024-09-13 13:02:30.986848] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.987250] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.987287] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.987295] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.987310] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.987333] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750987331, replica_locations:[]}) [2024-09-13 13:02:30.987354] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:30.987394] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:30.987407] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:30.987482] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:30.987585] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558864688, cache_obj->added_lc()=false, cache_obj->get_object_id()=541, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:30.989455] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:30.989665] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.989688] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:30.989695] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:30.989708] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:30.989724] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203750989723, replica_locations:[]}) [2024-09-13 13:02:30.989806] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1280483, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.026128] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.026515] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.026557] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=41][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.026590] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=32] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.026607] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.026695] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=73] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751026692, replica_locations:[]}) [2024-09-13 13:02:31.026721] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.026768] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=2][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.026781] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.026824] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.026940] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558904041, cache_obj->added_lc()=false, cache_obj->get_object_id()=542, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.028945] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.029210] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.029236] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.029255] 
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.029270] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.029288] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751029287, replica_locations:[]}) [2024-09-13 13:02:31.029378] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=37000, remain_us=1240911, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.057145] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:31.057179] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=32][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, 
local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751057136) [2024-09-13 13:02:31.057193] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203750857103, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:31.057220] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.057229] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.057237] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751057204) [2024-09-13 13:02:31.066745] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.067331] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.067370] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.067377] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.067393] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.067419] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751067417, replica_locations:[]}) [2024-09-13 13:02:31.067465] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=42] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.067516] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=2][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.067529] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:31.067577] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.067676] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558944778, cache_obj->added_lc()=false, cache_obj->get_object_id()=543, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.069515] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.069778] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.069800] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.069808] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.069821] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.069836] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751069835, replica_locations:[]}) [2024-09-13 13:02:31.069934] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=38000, remain_us=1200355, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.084499] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:31.084586] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=22] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1) [2024-09-13 13:02:31.093466] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=31] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:31.093710] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=18] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) 
[2024-09-13 13:02:31.093902] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:31.094086] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=28] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:31.094126] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=15] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:31.094475] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=15] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:31.094483] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=27] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:31.094908] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:31.095901] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=11] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:31.108256] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.108668] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.108706] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.108722] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.108741] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.108771] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751108769, replica_locations:[]}) [2024-09-13 13:02:31.108796] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.108842] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.108856] WDIAG [SQL] do_close 
(ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.108913] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.109048] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6558986153, cache_obj->added_lc()=false, cache_obj->get_object_id()=544, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.111233] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.111668] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.111703] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.111719] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.111738] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.111762] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751111761, replica_locations:[]}) [2024-09-13 13:02:31.111930] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1158359, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.119211] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=23] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:31.126338] WDIAG [SHARE] refresh (ob_alive_server_tracer.cpp:138) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C7F-0-0] [lt=4][errcode=-4002] invalid argument, empty server list(ret=-4002) [2024-09-13 13:02:31.126357] WDIAG [SHARE] refresh (ob_alive_server_tracer.cpp:380) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C7F-0-0] [lt=18][errcode=-4002] refresh sever list failed(ret=-4002) [2024-09-13 13:02:31.126364] WDIAG [SHARE] runTimerTask (ob_alive_server_tracer.cpp:255) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C7F-0-0] [lt=6][errcode=-4002] refresh alive server list failed(ret=-4002) [2024-09-13 13:02:31.136198] WDIAG [RPC.FRAME] run 
(ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC80-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.137032] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=6] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, table_name.ptr()="data_size:12, data:5F5F616C6C5F736572766572", ret=-5019) [2024-09-13 13:02:31.137059] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=24][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-09-13 13:02:31.137068] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_server, db_name=oceanbase) [2024-09-13 13:02:31.137078] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-09-13 13:02:31.137085] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=4][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:31.137092] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:31.137099] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_server' 
doesn't exist [2024-09-13 13:02:31.137106] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:31.137111] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:31.137116] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:31.137120] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:31.137126] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=5][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:31.137130] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=3][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:31.137135] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:31.137153] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=9][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:31.137160] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=6][errcode=-5019] Failed to 
generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.137174] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=11][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.137179] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:31.137186] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=7][errcode=-5019] fail to handle text query(stmt=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server, ret=-5019) [2024-09-13 13:02:31.137192] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=4][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:31.137199] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:31.137215] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=11][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:31.137231] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=13][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:31.137237] WDIAG [SERVER] force_close 
(ob_inner_sql_result.cpp:200) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=5][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:31.137241] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:31.137253] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:31.137262] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C80-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.137270] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19878][ServerGTimer][T0][YB42AC103323-000621F921960C80-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, aret=-5019, ret=-5019) [2024-09-13 13:02:31.137275] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server) [2024-09-13 13:02:31.137284] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:31.137289] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, 
cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:31.137298] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203751136848, sql=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server) [2024-09-13 13:02:31.137305] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:31.137383] WDIAG [SHARE] refresh (ob_all_server_tracer.cpp:568) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] fail to get servers_info(ret=-5019, ret="OB_TABLE_NOT_EXIST", GCTX.sql_proxy_=0x55a386ae7408) [2024-09-13 13:02:31.137388] WDIAG [SHARE] runTimerTask (ob_all_server_tracer.cpp:626) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] fail to refresh all server map(ret=-5019) [2024-09-13 13:02:31.151283] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.151676] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.151727] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=49][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.151739] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.151782] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=38] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.151814] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751151811, replica_locations:[]}) [2024-09-13 13:02:31.151843] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.151908] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.151923] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.151969] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.152062] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6559029166, cache_obj->added_lc()=false, cache_obj->get_object_id()=545, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.154251] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21F1-0-0] [lt=55][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.154788] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20301][T1_L0_G9][T1][YB42AC103326-00062119ECDB21F5-0-0] [lt=26][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.154780] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.155152] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21F6-0-0] [lt=132][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.155197] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.155224] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.155234] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.155256] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.155270] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751155269, replica_locations:[]}) [2024-09-13 13:02:31.155361] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=40000, remain_us=1114927, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.155625] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21FA-0-0] [lt=31][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.155884] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21FB-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.156344] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB21FF-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.156571] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2200-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.157059] WDIAG [STORAGE.TRANS] 
process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB5-0-0] [lt=33][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751156698) [2024-09-13 13:02:31.157081] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB5-0-0] [lt=16][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203751156698}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:31.157109] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.157120] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.157126] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751157094) [2024-09-13 13:02:31.157176] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2204-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.157430] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2205-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.157859] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2209-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.167685] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=19] PNIO [ratelimit] time: 1726203751167683, bytes: 3807209, bw: 0.124274 MB/s, add_ts: 1007619, add_bytes: 131304 [2024-09-13 13:02:31.195613] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.196086] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.196116] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.196126] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.196148] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.196169] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751196167, replica_locations:[]}) [2024-09-13 13:02:31.196190] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.196224] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.196238] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.196268] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.196342] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] 
set logical del time(cache_obj->get_logical_del_time()=6559073451, cache_obj->added_lc()=false, cache_obj->get_object_id()=546, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.197783] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.198052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.198079] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.198089] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.198109] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.198125] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751198124, replica_locations:[]}) [2024-09-13 13:02:31.198187] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=41000, remain_us=1072102, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.201078] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.201659] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.202779] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.204682] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.205793] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.206854] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782E5-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.208358] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.209398] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.213043] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.214120] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.214917] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=86] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:31.220945] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.222511] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.227457] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=10] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:31.227502] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=22] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, 
evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:31.227886] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=35] PNIO [ratelimit] time: 1726203751227883, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007745, add_bytes: 0 [2024-09-13 13:02:31.228184] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.229242] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=15] gc stale ls task succ [2024-09-13 13:02:31.229420] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.233841] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=21] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:31.235913] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.237134] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.238331] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=3][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:31.238375] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=40][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:31.238384] WDIAG [SERVER] 
get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:31.238398] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:31.239166] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9][errcode=0] server is initiating(server_id=0, local_seq=46, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:31.239429] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.239710] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.239738] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.239755] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.239773] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", 
server_list=[]) [2024-09-13 13:02:31.239794] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751239793, replica_locations:[]}) [2024-09-13 13:02:31.239826] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=29] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.239872] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.239899] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.239935] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.239998] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559117112, cache_obj->added_lc()=false, cache_obj->get_object_id()=547, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 
0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.240317] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=20] table not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, table_name.ptr()="data_size:16, data:5F5F616C6C5F6D657267655F696E666F", ret=-5019) [2024-09-13 13:02:31.240337] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=18][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, ret=-5019) [2024-09-13 13:02:31.240349] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=10][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_merge_info, db_name=oceanbase) [2024-09-13 13:02:31.240358] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=8][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_merge_info) [2024-09-13 13:02:31.240368] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=8][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:31.240375] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:31.240381] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=4][errcode=-5019] Table 
'oceanbase.__all_merge_info' doesn't exist [2024-09-13 13:02:31.240388] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=7][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:31.240403] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=14][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:31.240407] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:31.240414] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=7][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:31.240422] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=7][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:31.240426] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:31.240430] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:31.240451] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=16][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:31.240456] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=4][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.240462] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.240466] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=3][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:31.240470] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_merge_info WHERE tenant_id = '1', ret=-5019) [2024-09-13 13:02:31.240478] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:31.240486] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:31.240503] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=13][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:31.240517] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=12][errcode=-5019] result set close 
failed(ret=-5019) [2024-09-13 13:02:31.240526] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=8][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:31.240529] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:31.240541] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=7][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:31.240550] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.240558] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C81-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, aret=-5019, ret=-5019) [2024-09-13 13:02:31.240563] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1') [2024-09-13 13:02:31.240571] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:31.240578] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] 
execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:31.240586] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203751240203, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1') [2024-09-13 13:02:31.240598] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:31.240604] WDIAG [SHARE] load_global_merge_info (ob_global_merge_table_operator.cpp:49) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, meta_tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1') [2024-09-13 13:02:31.240655] WDIAG [STORAGE] refresh_merge_info (ob_tenant_freeze_info_mgr.cpp:890) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] failed to load global merge info(ret=-5019, ret="OB_TABLE_NOT_EXIST", global_merge_info={tenant_id:1, cluster:{name:"cluster", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, frozen_scn:{name:"frozen_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, global_broadcast_scn:{name:"global_broadcast_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, last_merged_scn:{name:"last_merged_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, is_merge_error:{name:"is_merge_error", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, merge_status:{name:"merge_status", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, error_type:{name:"error_type", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, suspend_merging:{name:"suspend_merging", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, 
need_update:false}, merge_start_time:{name:"merge_start_time", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, last_merged_time:{name:"last_merged_time", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}}) [2024-09-13 13:02:31.240684] WDIAG [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:1005) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=29][errcode=-5019] fail to refresh merge info(tmp_ret=-5019, tmp_ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:31.240699] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6][errcode=0] server is initiating(server_id=0, local_seq=47, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:31.241505] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=37][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.241746] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.241775] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.241792] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.241808] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.241825] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751241824, replica_locations:[]}) [2024-09-13 13:02:31.241900] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=42000, remain_us=1028389, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.242379] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:31.242556] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.242944] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.242961] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.242976] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.242983] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.242994] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751242993, replica_locations:[]}) [2024-09-13 13:02:31.243037] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1997654, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.243141] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.243344] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.243360] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.243370] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.243380] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.243394] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751243393, replica_locations:[]}) [2024-09-13 13:02:31.243413] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.243457] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.243469] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.243493] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] 
[lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.243535] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559120653, cache_obj->added_lc()=false, cache_obj->get_object_id()=549, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.244471] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.244763] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.244863] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.244901] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.244911] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.244926] 
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.244941] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751244941, replica_locations:[]}) [2024-09-13 13:02:31.244992] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=1000, remain_us=1995699, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.245868] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.246190] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.246697] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.246722] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:31.246731] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.246745] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.246792] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751246791, replica_locations:[]}) [2024-09-13 13:02:31.246815] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.246839] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.246848] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.246866] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not 
valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.246919] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=26][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559124038, cache_obj->added_lc()=false, cache_obj->get_object_id()=550, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.247942] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.248194] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.248221] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.248235] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.248246] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.248259] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751248259, replica_locations:[]}) [2024-09-13 13:02:31.248298] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=2000, remain_us=1992392, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.250496] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.250750] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.250769] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.250776] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.250783] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.250798] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751250797, replica_locations:[]}) [2024-09-13 13:02:31.250810] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.250826] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.250834] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.250851] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.250888] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559128009, cache_obj->added_lc()=false, cache_obj->get_object_id()=551, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 
0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.251705] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.252389] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.252416] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.252426] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.252453] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.252472] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751252471, 
replica_locations:[]}) [2024-09-13 13:02:31.252523] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=3000, remain_us=1988167, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.254323] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.254542] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C89-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.254898] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=5][errcode=-4018] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:31.254970] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.254988] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.255069] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=10][errcode=0] server is initiating(server_id=0, local_seq=48, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:31.255432] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.255746] 
WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.256824] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.256850] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.256865] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751256864, replica_locations:[]}) [2024-09-13 13:02:31.256892] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.256916] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.256929] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.256949] 
WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.256942] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=23] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:31.256973] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=27][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:31.256997] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=22][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:31.257018] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=19][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:31.257033] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=10][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:31.257054] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=21][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:31.257067] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=7][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:31.257082] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=13][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:31.257092] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=9][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:31.257106] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=13][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:31.257115] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=8][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:31.257130] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=14][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:31.257138] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=7][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:31.257153] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=14][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:31.257176] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=11][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:31.257168] WDIAG [STORAGE.TRANS] 
process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:31.257192] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=13][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.257190] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751257160) [2024-09-13 13:02:31.257205] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=8][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.257203] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203751057203, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:31.257220] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=13][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:31.257232] WDIAG [STORAGE.TRANS] generate_min_weak_read_version 
(ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.257242] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.257245] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=23][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:31.257249] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751257216) [2024-09-13 13:02:31.257262] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=15][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:31.257264] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB6-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751256770) [2024-09-13 13:02:31.257279] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=16][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY 
tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:31.257300] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:31.257247] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559134106, cache_obj->added_lc()=false, cache_obj->get_object_id()=552, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.257292] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB6-0-0] [lt=21][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203751256770}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:31.257311] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] [WRS] 
[TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:31.257307] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=21][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:31.257330] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.257332] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=19][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:31.257341] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=9][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:31.257342] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.257348] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=7][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:31.257351] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751257325) [2024-09-13 13:02:31.257371] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=8][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:31.257389] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=16][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.257399] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=9][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:31.257415] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=15][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:31.257448] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=31][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:31.257457] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=8][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:31.257466] WDIAG [COMMON.MYSQLP] read 
(ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=8][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203751256535, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:31.257485] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=18][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:31.257494] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=6][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:31.257622] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=15][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:31.257643] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=19][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:31.257657] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=13][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:31.257667] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=9][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:31.257688] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=15][errcode=-5019] build 
replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:31.257705] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=16][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:31.257727] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C89-0-0] [lt=21][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:31.258353] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.259777] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.259800] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.259813] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751259812, replica_locations:[]}) [2024-09-13 13:02:31.259887] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1980804, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.264107] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.264299] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.264316] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.264343] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=22] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751264342, replica_locations:[]}) [2024-09-13 13:02:31.264357] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.264376] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.264390] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.264415] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.264455] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559141575, cache_obj->added_lc()=false, cache_obj->get_object_id()=553, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.264900] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.265349] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.265544] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.265559] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.265569] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751265568, replica_locations:[]}) [2024-09-13 13:02:31.265606] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1975084, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.265961] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.270766] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.271063] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.271084] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.271096] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751271096, replica_locations:[]}) [2024-09-13 13:02:31.271118] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.271147] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.271159] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.271197] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.271229] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559148349, cache_obj->added_lc()=false, cache_obj->get_object_id()=554, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.272332] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:31.272578] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.272596] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.272606] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751272605, replica_locations:[]}) [2024-09-13 13:02:31.272649] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=6000, remain_us=1968041, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.276557] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=37][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.277574] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.278829] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.279086] 
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.279108] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.279118] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751279118, replica_locations:[]}) [2024-09-13 13:02:31.279132] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.279152] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.279161] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.279194] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") 
[2024-09-13 13:02:31.279225] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559156346, cache_obj->added_lc()=false, cache_obj->get_object_id()=555, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.280135] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.280359] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.280374] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.280384] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751280383, replica_locations:[]}) [2024-09-13 13:02:31.280429] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1960261, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.284094] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.284357] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.284374] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.284384] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751284383, replica_locations:[]}) [2024-09-13 13:02:31.284399] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.284417] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.284432] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.284458] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.284494] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559161612, cache_obj->added_lc()=false, cache_obj->get_object_id()=548, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.284695] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=27] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0) [2024-09-13 13:02:31.284804] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=23] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:31.285225] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.285690] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.285707] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.285716] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751285715, replica_locations:[]}) [2024-09-13 13:02:31.285754] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=984535, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.287616] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.287858] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.287881] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.287892] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751287891, replica_locations:[]}) [2024-09-13 13:02:31.287902] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.287920] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.287929] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.287942] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.287970] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559165090, cache_obj->added_lc()=false, cache_obj->get_object_id()=556, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.288798] WDIAG [SERVER] 
fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.288975] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.288993] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.289002] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751289002, replica_locations:[]}) [2024-09-13 13:02:31.289044] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1951647, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.289155] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.290233] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.297207] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.297557] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.297575] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.297586] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751297585, replica_locations:[]}) [2024-09-13 13:02:31.297600] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.297619] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.297627] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.297653] WDIAG [SQL] move_to_sqlstat_cache 
(ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.297678] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559174799, cache_obj->added_lc()=false, cache_obj->get_object_id()=558, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.298494] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.298771] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.298789] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.298798] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751298798, replica_locations:[]}) [2024-09-13 13:02:31.298836] INFO 
[SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1941854, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.302730] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.304028] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.307998] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.308282] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.308301] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.308324] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751308323, replica_locations:[]}) [2024-09-13 13:02:31.308344] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.308364] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.308372] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.308389] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.308550] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559185669, cache_obj->added_lc()=false, cache_obj->get_object_id()=559, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.309615] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.309859] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.309886] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.309898] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751309897, replica_locations:[]}) [2024-09-13 13:02:31.309937] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=10000, remain_us=1930754, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.317508] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.318733] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.320106] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.320339] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.320359] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.320373] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751320372, replica_locations:[]}) [2024-09-13 13:02:31.320387] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.320420] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.320428] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.320456] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.320487] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6559197608, cache_obj->added_lc()=false, cache_obj->get_object_id()=560, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.321318] INFO [SHARE] blacklist_loop_ (ob_server_blacklist.cpp:313) [20019][Blacklist][T0][Y0-0000000000000000-0-0] [lt=15] blacklist_loop exec finished(cost_time=18, is_enabled=true, send_cnt=0) [2024-09-13 13:02:31.321388] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.321613] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.321630] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.321640] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751321639, replica_locations:[]}) [2024-09-13 13:02:31.321684] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=11000, 
remain_us=1919007, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.328955] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.329331] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.329348] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.329360] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751329360, replica_locations:[]}) [2024-09-13 13:02:31.329387] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.329409] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.329421] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.329462] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.329512] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559206626, cache_obj->added_lc()=false, cache_obj->get_object_id()=557, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.330367] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.330805] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.330825] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.330837] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, 
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751330836, replica_locations:[]}) [2024-09-13 13:02:31.330906] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=44000, remain_us=939383, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.332064] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=38][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4018, dropped:42, tid:19944}, {errcode:-4721, dropped:1689, tid:19944}]) [2024-09-13 13:02:31.332872] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.332898] INFO pn_ratelimit (group.c:643) [20054][IngressService][T0][Y0-0000000000000000-0-0] [lt=12] PNIO set ratelimit as 9223372036854775807 bytes/s, grp_id=2 [2024-09-13 13:02:31.333116] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.333141] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.333151] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] leader doesn't 
exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.333166] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.333179] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751333179, replica_locations:[]}) [2024-09-13 13:02:31.333195] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.333287] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.333213] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:11, local_retry_times:11, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:31.333832] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=615][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 
13:02:31.333851] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.333866] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.333891] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=24][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.333897] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:31.333915] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:31.333931] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.333972] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559211091, cache_obj->added_lc()=false, cache_obj->get_object_id()=561, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 
0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.334484] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.335019] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.335053] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=33][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.335147] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.335382] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.335406] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.335419] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, 
ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.335433] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.335471] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=31] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751335470, replica_locations:[]}) [2024-09-13 13:02:31.335490] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.335505] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.335539] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=33][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.335557] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:31.335569] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:31.335579] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:31.335605] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=24][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:31.335619] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.335629] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.335641] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:31.335651] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] 
[lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:31.335662] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:31.335673] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:31.335686] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:31.335697] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:31.335704] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:31.335713] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:31.335720] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:31.335734] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4721] Failed to optimizer 
stmt(ret=-4721) [2024-09-13 13:02:31.335745] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:31.335753] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.335760] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:31.335771] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:31.335784] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:31.335794] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=12, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:31.335815] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] will sleep(sleep_us=12000, remain_us=1904876, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 
13:02:31.339120] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.339141] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:31.339165] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:31.339181] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:31.339196] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=5] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:31.339192] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CBF-0-0] [lt=62][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203751339149}) [2024-09-13 13:02:31.339207] WDIAG [STORAGE.TRANS] operator() (ob_ts_mgr.h:175) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4721] refresh gts failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:31.339216] INFO [STORAGE.TRANS] 
operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:31.348036] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.348318] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.348344] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.348358] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.348380] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.348397] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751348396, replica_locations:[]}) [2024-09-13 
13:02:31.348417] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.348466] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:12, local_retry_times:12, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:31.348488] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.348500] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.348515] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.348525] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.348534] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:31.348560] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT 
row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:31.348577] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.348623] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559225739, cache_obj->added_lc()=false, cache_obj->get_object_id()=563, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.349056] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=13] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:31.349964] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.349997] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=32][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.350008] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] 
[lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.350096] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.350309] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.350327] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.350340] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.350353] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.350370] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751350369, replica_locations:[]}) [2024-09-13 13:02:31.350389] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.350412] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=22][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.350421] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.350459] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=37][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:31.350471] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:31.350484] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) 
[2024-09-13 13:02:31.350501] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:31.350515] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.350528] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.350539] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:31.350549] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:31.350565] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:31.350579] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:31.350591] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:31.350602] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:31.350613] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:31.350620] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:31.350631] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:31.350641] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:31.350658] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:31.350668] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.350679] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:31.350689] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] fail to 
handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:31.350705] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:31.350716] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=13, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:31.350735] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] will sleep(sleep_us=13000, remain_us=1889956, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.351169] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.357350] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:31.357385] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] post cluster heartbeat 
rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751357343) [2024-09-13 13:02:31.357395] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203751257216, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:31.357416] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.357425] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.357429] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751357403) [2024-09-13 13:02:31.363566] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4D-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:31.363591] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4D-0-0] [lt=24][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203751363092], range_size:1, sender:"172.16.51.38:2882"}) 
[2024-09-13 13:02:31.363931] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.364112] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDD-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.364243] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.364277] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.364290] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.364305] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.364323] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751364322, 
replica_locations:[]}) [2024-09-13 13:02:31.364343] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.364365] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:13, local_retry_times:13, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:31.364391] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=22][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.364396] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.364407] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.364414] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.364417] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:31.364430] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to process 
record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:31.364449] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.364505] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559241624, cache_obj->added_lc()=false, cache_obj->get_object_id()=564, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.364753] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDD-0-0] [lt=12][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203751364410, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035606, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203751364278}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:31.364794] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDD-0-0] 
[lt=40][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:31.365495] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.365523] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.365643] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.365891] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.365917] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.365924] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.365930] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is 
empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.365942] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751365942, replica_locations:[]}) [2024-09-13 13:02:31.365953] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.365959] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.365968] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.365979] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:31.365987] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:31.365999] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:31.366012] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:31.366023] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.366028] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.366034] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:31.366038] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:31.366046] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:31.366052] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:31.366060] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:31.366064] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:31.366071] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:31.366080] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:31.366084] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:31.366091] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:31.366101] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] 
[lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:31.366108] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.366116] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:31.366122] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:31.366129] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:31.366134] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=14, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:31.366150] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] will sleep(sleep_us=14000, remain_us=1874541, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.367683] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle 
failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.368995] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.375073] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.375423] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.375455] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.375461] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.375468] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.375480] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203751375479, replica_locations:[]}) [2024-09-13 13:02:31.375495] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.375518] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:44, local_retry_times:44, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:31.375537] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.375549] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.375579] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.375586] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.375603] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:31.375621] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:31.375635] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.375682] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559252796, cache_obj->added_lc()=false, cache_obj->get_object_id()=562, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.376509] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.376530] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.376603] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.377025] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.377047] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.377062] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.377074] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.377086] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751377085, replica_locations:[]}) [2024-09-13 13:02:31.377105] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.377116] WDIAG [SHARE.LOCATION] get 
(ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.377126] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.377139] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:31.377148] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:31.377157] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:31.377171] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:31.377187] WDIAG [SQL.OPT] 
calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.377196] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.377206] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:31.377213] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:31.377221] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:31.377231] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:31.377244] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:31.377252] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] fail to generate normal raw plan(ret=-4721) 
[2024-09-13 13:02:31.377260] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:31.377267] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:31.377276] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:31.377284] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:31.377300] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:31.377311] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.377320] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:31.377328] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:31.377338] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 
13:02:31.377347] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=45, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:31.377364] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] will sleep(sleep_us=45000, remain_us=892925, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.380367] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.380601] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.380622] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.380640] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.380650] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.380667] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751380666, replica_locations:[]}) [2024-09-13 13:02:31.380685] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.380710] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:14, local_retry_times:14, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:31.380728] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.380739] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.380753] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.380763] 
WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.380772] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:31.380787] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:31.380804] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.380863] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=23][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559257979, cache_obj->added_lc()=false, cache_obj->get_object_id()=565, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.381730] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=28][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}) [2024-09-13 13:02:31.381759] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=28][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.381918] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.382132] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.382153] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.382162] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.382176] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.382191] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751382190, replica_locations:[]}) [2024-09-13 13:02:31.382219] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.382233] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.382244] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.382260] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:31.382271] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:31.382282] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] Get partition error, the location cache will be renewed 
later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:31.382299] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:31.382312] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.382322] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.382332] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:31.382344] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:31.382353] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:31.382365] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE 
table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:31.382376] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:31.382386] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:31.382393] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:31.382399] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:31.382409] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:31.382418] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:31.382432] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:31.382457] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=23][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.382467] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] fail to handle physical plan(ret=-4721) 
[2024-09-13 13:02:31.382480] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:31.382490] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:31.382497] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=15, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:31.382517] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] will sleep(sleep_us=15000, remain_us=1858174, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.386490] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.387831] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.397836] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.398175] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.398207] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.398218] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.398233] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.398257] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751398256, replica_locations:[]}) [2024-09-13 13:02:31.398280] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.398320] WDIAG [SERVER] after_func 
(ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=29][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:15, local_retry_times:15, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:31.398364] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=38][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.398377] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.398390] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.398397] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.398401] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:31.398424] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:31.398466] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=40][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 
13:02:31.398526] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559275640, cache_obj->added_lc()=false, cache_obj->get_object_id()=567, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.399949] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.399986] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=36][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.400105] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.400350] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.400372] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, 
replica count=0) [2024-09-13 13:02:31.400384] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.400399] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.400416] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751400415, replica_locations:[]}) [2024-09-13 13:02:31.400455] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.400470] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:31.400483] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, 
expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:31.400499] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:31.400521] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:31.400532] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:31.400549] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:31.400563] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.400575] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:31.400596] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:31.400606] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:31.400611] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:31.400620] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:31.400630] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:31.400636] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:31.400650] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:31.400657] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:31.400667] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:31.400675] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:31.400689] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:31.400701] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:31.400709] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:31.400717] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:31.400728] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:31.400737] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=16, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:31.400760] INFO 
[SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] will sleep(sleep_us=16000, remain_us=1839931, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.406372] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.407635] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.417048] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.417300] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.417341] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.417360] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.417376] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.417396] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751417395, replica_locations:[]}) [2024-09-13 13:02:31.417419] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.417453] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=26][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:16, local_retry_times:16, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:31.417476] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.417489] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.417504] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.417515] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:31.417535] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.417601] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559294714, cache_obj->added_lc()=false, cache_obj->get_object_id()=568, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.418934] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.419291] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.419317] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.419327] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, 
ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.419358] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=29] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.419373] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751419372, replica_locations:[]}) [2024-09-13 13:02:31.419456] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1821235, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.422581] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.423202] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.423250] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=46][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.423267] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.423284] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.423325] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=33] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751423324, replica_locations:[]}) [2024-09-13 13:02:31.423347] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.423390] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:31.423415] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.423428] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.423473] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.423534] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559300647, cache_obj->added_lc()=false, cache_obj->get_object_id()=566, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.424780] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.425096] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.425125] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.425138] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.425159] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.425175] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751425174, replica_locations:[]})
[2024-09-13 13:02:31.425243] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=46000, remain_us=845046, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:31.427139] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.428267] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.433303] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.434074] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.435405] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.436673] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.436928] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.436950] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.436968] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.436977] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.436992] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751436992, replica_locations:[]})
[2024-09-13 13:02:31.437052] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.437007] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.437118] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.437128] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.437167] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:31.437215] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559314332, cache_obj->added_lc()=false, cache_obj->get_object_id()=569, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:31.438361] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.438684] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.438824] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.438858] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.438885] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=26] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.438896] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.438907] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751438906, replica_locations:[]})
[2024-09-13 13:02:31.438960] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1801731, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:31.440151] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690062-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.441400] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.442575] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.446274] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.447679] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.448769] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.449926] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.452275] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.453432] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.457171] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=43][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.457347] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB7-0-0] [lt=33][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751456877)
[2024-09-13 13:02:31.457374] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB7-0-0] [lt=22][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203751456877}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:31.457403] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:31.457418] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:31.457425] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751457389)
[2024-09-13 13:02:31.457458] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.457477] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.457484] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.457493] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.457514] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751457512, replica_locations:[]})
[2024-09-13 13:02:31.457535] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.457567] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.457577] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.457597] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:31.457642] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559334760, cache_obj->added_lc()=false, cache_obj->get_object_id()=571, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:31.458931] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.458930] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.459111] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.459128] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.459134] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.459144] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.459154] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751459153, replica_locations:[]})
[2024-09-13 13:02:31.459205] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1781486, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:31.460094] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.466573] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.467690] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.468626] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20294][T1_L0_G0][T1][YB42AC103326-00062119DAF2902F-0-0] [lt=22][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:31.471466] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.471492] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.472022] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.472055] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.472079] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.472096] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.472115] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751472114, replica_locations:[]})
[2024-09-13 13:02:31.472138] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.472172] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.472187] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.472219] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:31.472274] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559349388, cache_obj->added_lc()=false, cache_obj->get_object_id()=570, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:31.472894] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.473514] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.473884] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.473916] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.473938] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.473956] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.473973] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751473973, replica_locations:[]})
[2024-09-13 13:02:31.474034] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=796254, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:31.475203] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.476547] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.478403] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=197][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.478628] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.478648] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.478656] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.478665] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.478676] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751478675, replica_locations:[]})
[2024-09-13 13:02:31.478692] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.478713] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.478722] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.478744] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:31.478784] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559355902, cache_obj->added_lc()=false, cache_obj->get_object_id()=572, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:31.479720] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.479927] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.479943] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.479949] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.479956] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.479964] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751479963, replica_locations:[]})
[2024-09-13 13:02:31.480000] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1760690, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:31.485112] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:31.485272] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.486547] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.487385] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=40] Cache replace map node details(ret=0, replace_node_count=0, replace_time=2577, replace_start_pos=440398, replace_num=62914)
[2024-09-13 13:02:31.487409] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=24] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10)
[2024-09-13 13:02:31.495446] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.496062] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.496773] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.497326] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.500200] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.500492] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.500509] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.500516] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.500526] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.500576] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=41] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751500575, replica_locations:[]})
[2024-09-13 13:02:31.500595] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.500618] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.500627] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.500648] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:31.500699] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559377816, cache_obj->added_lc()=false, cache_obj->get_object_id()=574, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:31.501816] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.502147] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.502162] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.502168] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.502177] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.502195] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751502194, replica_locations:[]})
[2024-09-13 13:02:31.502249] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1738442, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:31.504886] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] [lt=23][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:31.507851] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.509673] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.520307] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.521176] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.521259] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.521546] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.521549] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.521608] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=57][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.521625] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.521642] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.521662] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751521661, replica_locations:[]})
[2024-09-13 13:02:31.521691] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442)
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=27] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.521726] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.521741] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.521770] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.521825] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559398938, cache_obj->added_lc()=false, cache_obj->get_object_id()=573, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.522605] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.522903] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.523206] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.523229] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.523239] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.523251] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.523264] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751523264, replica_locations:[]}) [2024-09-13 13:02:31.523314] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=48000, remain_us=746975, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.523420] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] 
[lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.523693] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.523708] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.523714] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.523725] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.523735] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751523735, replica_locations:[]}) [2024-09-13 13:02:31.523749] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, 
tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.523769] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.523788] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.523820] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.523856] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559400975, cache_obj->added_lc()=false, cache_obj->get_object_id()=575, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.524852] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.525208] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.525228] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.525234] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.525241] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.525250] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751525250, replica_locations:[]}) [2024-09-13 13:02:31.525289] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=22000, remain_us=1715401, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.535182] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.536629] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.546169] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] 
[lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.547317] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.547492] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.548216] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.548253] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=35][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.548264] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.548276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.548293] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751548292, replica_locations:[]}) [2024-09-13 13:02:31.548309] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.548338] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.548348] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.548372] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.548415] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559425532, cache_obj->added_lc()=false, cache_obj->get_object_id()=577, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.549526] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.549718] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.549741] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.549751] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.549763] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.549776] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751549775, replica_locations:[]}) [2024-09-13 13:02:31.549823] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1690868, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.550405] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] 
[lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.551853] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.557465] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:31.557487] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751557456) [2024-09-13 13:02:31.557497] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203751357403, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:31.557524] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.557535] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak 
read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.557541] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751557504) [2024-09-13 13:02:31.566597] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.567991] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.571498] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.571967] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.571986] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.571993] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.572001] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.572017] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751572015, replica_locations:[]}) [2024-09-13 13:02:31.572032] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.572053] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.572062] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.572153] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.572204] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559449320, cache_obj->added_lc()=false, cache_obj->get_object_id()=576, cache_obj->get_tenant_id()=1, 
lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.572826] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.573001] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.573214] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.573242] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.573254] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.573269] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.573286] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] 
[LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751573285, replica_locations:[]}) [2024-09-13 13:02:31.573307] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.573333] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.573343] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.573366] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.573414] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559450532, cache_obj->added_lc()=false, cache_obj->get_object_id()=578, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.573455] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.573799] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.573813] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.573818] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.573828] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.573839] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751573838, replica_locations:[]}) [2024-09-13 13:02:31.573895] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=696393, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, 
v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.574376] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.574496] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.574664] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.574698] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.574710] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.574721] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.574738] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751574738, replica_locations:[]}) [2024-09-13 13:02:31.574794] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=24000, remain_us=1665897, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.583776] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.585370] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.599045] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.599331] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.599357] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.599369] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.599382] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.599400] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751599399, replica_locations:[]}) [2024-09-13 13:02:31.599419] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.599466] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.599478] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.599503] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.599551] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559476669, cache_obj->added_lc()=false, cache_obj->get_object_id()=580, 
cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.600722] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=83][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.600961] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.600982] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.600992] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.600996] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.601004] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.601020] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls 
location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751601019, replica_locations:[]}) [2024-09-13 13:02:31.601073] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=25000, remain_us=1639617, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.602172] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.602274] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=44][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.603741] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.621501] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.622991] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.623157] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.623810] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.623831] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.623837] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.623846] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.623886] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751623885, replica_locations:[]}) [2024-09-13 13:02:31.623903] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.623892] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) 
[19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=42] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 
9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:31.623927] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.623934] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.623958] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.624005] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559501121, cache_obj->added_lc()=false, 
cache_obj->get_object_id()=579, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.625046] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.625380] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.625400] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.625406] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.625413] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.625424] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751625423, replica_locations:[]}) [2024-09-13 13:02:31.625482] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=50000, remain_us=644806, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.626391] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.626499] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.626520] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.626530] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.626548] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.626562] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has 
changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751626562, replica_locations:[]}) [2024-09-13 13:02:31.626577] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.626599] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.626609] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.626634] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.626684] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559503799, cache_obj->added_lc()=false, cache_obj->get_object_id()=581, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.627839] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] 
[lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.628063] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.628091] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.628104] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.628119] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.628132] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751628131, replica_locations:[]}) [2024-09-13 13:02:31.628177] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1612513, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203753240690) 
[2024-09-13 13:02:31.629763] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.631112] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.641697] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.643171] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.654413] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.654737] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.654768] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.654781] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 
13:02:31.654794] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.654815] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751654814, replica_locations:[]}) [2024-09-13 13:02:31.654836] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.654863] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.654895] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=30][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.654921] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.654970] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559532088, cache_obj->added_lc()=false, 
cache_obj->get_object_id()=583, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.656150] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=50][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.656404] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.656426] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.656446] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.656460] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.656480] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751656479, replica_locations:[]}) [2024-09-13 13:02:31.656537] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1584154, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.657511] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.657534] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.657543] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751657494) [2024-09-13 13:02:31.657581] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB8-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751657018) [2024-09-13 13:02:31.657629] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB8-0-0] [lt=41][errcode=-4341] tenant weak read service process cluster heartbeat RPC 
fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203751657018}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:31.657642] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:31.657659] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751657636) [2024-09-13 13:02:31.657671] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203751557503, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:31.657688] WDIAG 
[STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.657695] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.657700] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751657682) [2024-09-13 13:02:31.659660] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.661066] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.662724] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.664275] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.675692] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.676239] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.676265] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.676275] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.676286] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.676318] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751676317, replica_locations:[]}) [2024-09-13 13:02:31.676338] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.676371] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.676378] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.676402] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.676458] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559553574, cache_obj->added_lc()=false, cache_obj->get_object_id()=582, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.677513] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.677709] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.677728] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.677734] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.677741] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.677751] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751677750, replica_locations:[]}) [2024-09-13 13:02:31.677801] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=592488, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.683722] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.683983] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.684011] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=27][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.684028] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.684040] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.684055] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751684054, replica_locations:[]}) [2024-09-13 13:02:31.684070] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.684092] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.684102] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.684121] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.684161] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559561280, cache_obj->added_lc()=false, cache_obj->get_object_id()=584, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.684812] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.685124] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.685324] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.685347] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.685357] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.685380] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.685397] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751685396, replica_locations:[]}) [2024-09-13 13:02:31.685408] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:31.685478] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1555213, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.686153] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.687488] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=8] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9) [2024-09-13 13:02:31.690561] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.691722] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.707683] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.709084] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.713693] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.714023] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.714052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.714064] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.714076] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.714096] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751714095, replica_locations:[]}) [2024-09-13 13:02:31.714116] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.714139] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.714150] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.714174] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.714213] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559591332, cache_obj->added_lc()=false, cache_obj->get_object_id()=586, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 
0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.715326] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.715523] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.715544] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.715555] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.715567] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.715580] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751715579, 
replica_locations:[]}) [2024-09-13 13:02:31.715644] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1525046, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.722235] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.723793] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.727509] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:31.727547] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=19] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:31.729022] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.729300] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.729325] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.729334] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.729342] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.729358] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751729357, replica_locations:[]}) [2024-09-13 13:02:31.729371] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.729391] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.729400] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:31.729425] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.729488] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559606605, cache_obj->added_lc()=false, cache_obj->get_object_id()=585, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.730388] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.730572] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.730591] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.730597] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.730604] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.730616] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751730615, replica_locations:[]}) [2024-09-13 13:02:31.730660] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=52000, remain_us=539628, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.731617] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.733114] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.744858] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.745186] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.745208] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.745220] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.745240] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.745256] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751745255, replica_locations:[]}) [2024-09-13 13:02:31.745272] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.745294] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.745304] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) 
[2024-09-13 13:02:31.745327] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.745386] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559622504, cache_obj->added_lc()=false, cache_obj->get_object_id()=587, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.746495] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=34][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.746807] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.746833] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.746844] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.746855] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.746869] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751746868, replica_locations:[]}) [2024-09-13 13:02:31.746954] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=30000, remain_us=1493737, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.755455] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.756659] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.756859] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.757569] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB9-0-0] [lt=31][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, 
total_part_count=0, generate_timestamp=1726203751757121) [2024-09-13 13:02:31.757615] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AB9-0-0] [lt=41][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203751757121}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:31.757650] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.757667] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:31.757673] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751757629) [2024-09-13 13:02:31.758106] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=30][errcode=-4719] get ls handle 
failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.777175] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.777829] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.777857] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.777897] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=38] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.777914] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.777931] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751777930, replica_locations:[]}) [2024-09-13 13:02:31.777948] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.777973] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.777984] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.778012] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.778068] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559655184, cache_obj->added_lc()=false, cache_obj->get_object_id()=589, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.779237] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.779554] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:31.779585] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.779600] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.779619] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.779639] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.779658] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751779657, replica_locations:[]}) [2024-09-13 13:02:31.779732] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=31000, remain_us=1460958, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.782608] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.782870] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.783040] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.783065] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.783072] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.783079] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.783088] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751783087, replica_locations:[]}) [2024-09-13 13:02:31.783104] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.783128] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.783141] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.783176] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.783215] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559660332, cache_obj->added_lc()=false, cache_obj->get_object_id()=588, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.784059] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.784083] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:31.784243] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.784265] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.784271] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.784281] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.784291] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751784291, replica_locations:[]}) [2024-09-13 13:02:31.784333] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=53000, remain_us=485955, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.789385] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.790894] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.809608] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.810965] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.811077] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.811319] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.811349] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.811360] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.811372] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.811388] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751811387, replica_locations:[]}) [2024-09-13 13:02:31.811405] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.811433] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.811452] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.811474] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.811517] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559688635, cache_obj->added_lc()=false, cache_obj->get_object_id()=590, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 
0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.812721] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.813102] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.813141] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.813158] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.813198] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=38] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.813219] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751813218, 
replica_locations:[]}) [2024-09-13 13:02:31.813284] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1427406, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:31.824579] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.826087] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.837541] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.837603] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.837933] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.837954] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.837974] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.837985] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.837998] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751837997, replica_locations:[]}) [2024-09-13 13:02:31.838012] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.838035] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.838051] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.838073] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.838126] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] 
set logical del time(cache_obj->get_logical_del_time()=6559715240, cache_obj->added_lc()=false, cache_obj->get_object_id()=591, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.839286] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.839342] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.839568] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.839592] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.839605] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:31.839610] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.839625] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.839640] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751839639, replica_locations:[]}) [2024-09-13 13:02:31.839645] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:31.839704] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=54000, remain_us=430585, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:31.845509] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.845970] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.845999] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.846010] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.846022] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.846036] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751846036, replica_locations:[]}) [2024-09-13 13:02:31.846052] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:31.846074] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:31.846085] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:31.846118] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the 
key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.846330] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559723444, cache_obj->added_lc()=false, cache_obj->get_object_id()=592, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.847555] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.847945] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.847971] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.847982] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.847999] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.848013] INFO 
[SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751848012, replica_locations:[]})
[2024-09-13 13:02:31.848061] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=33000, remain_us=1392630, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:31.857693] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1)
[2024-09-13 13:02:31.857721] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:31.857748] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751857687)
[2024-09-13 13:02:31.857758] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203751657680, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:31.857777] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:31.857786] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:31.857792] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751857765)
[2024-09-13 13:02:31.860701] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.862089] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.864083] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4E-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:31.864103] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4E-0-0] [lt=19][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203751863565], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:31.864650] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDE-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:31.865360] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDE-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:31.866796] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.868179] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.872000] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=20] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:31.873207] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=35] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:31.873530] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=6] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:31.881301] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.881779] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.881806] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.881817] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.881830] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.881856] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751881855, replica_locations:[]})
[2024-09-13 13:02:31.881884] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.881921] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.881931] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.881956] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:31.882005] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559759124, cache_obj->added_lc()=false, cache_obj->get_object_id()=594, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:31.883163] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.883543] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.883570] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.883620] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=49] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.883638] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.883659] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751883658, replica_locations:[]})
[2024-09-13 13:02:31.883713] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=34000, remain_us=1356978, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:31.885731] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=21] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:31.887575] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8)
[2024-09-13 13:02:31.891155] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=5396, clean_start_pos=880803, clean_num=125829)
[2024-09-13 13:02:31.893919] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.894213] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.894239] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.894248] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.894268] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.894282] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751894281, replica_locations:[]})
[2024-09-13 13:02:31.894298] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.894326] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.894337] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.894368] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:31.894419] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559771534, cache_obj->added_lc()=false, cache_obj->get_object_id()=593, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:31.895533] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.895762] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.895787] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.895795] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.895823] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.895834] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751895833, replica_locations:[]})
[2024-09-13 13:02:31.895902] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=55000, remain_us=374387, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:31.896740] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.897729] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.898236] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.899326] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.917916] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.918403] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.918424] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.918458] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.918470] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.918487] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751918486, replica_locations:[]})
[2024-09-13 13:02:31.918512] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.918537] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.918548] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.918583] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:31.918630] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559795748, cache_obj->added_lc()=false, cache_obj->get_object_id()=595, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:31.919750] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.920149] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.920170] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.920180] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.920192] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.920205] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751920204, replica_locations:[]})
[2024-09-13 13:02:31.920257] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=35000, remain_us=1320433, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:31.927764] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.929135] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.935981] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.937261] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.951102] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.951421] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.951472] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=50][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.951484] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.951496] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.951511] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751951510, replica_locations:[]})
[2024-09-13 13:02:31.951527] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.951552] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.951571] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.951595] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:31.951640] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559828757, cache_obj->added_lc()=false, cache_obj->get_object_id()=596, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:31.952627] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.952863] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.952903] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.952920] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.952937] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.952955] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751952954, replica_locations:[]})
[2024-09-13 13:02:31.953021] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=56000, remain_us=317268, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203752270288)
[2024-09-13 13:02:31.955423] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.955856] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.955869] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.955890] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.955900] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.955910] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751955910, replica_locations:[]})
[2024-09-13 13:02:31.955924] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.955942] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.955948] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.955966] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:31.956006] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559833124, cache_obj->added_lc()=false, cache_obj->get_object_id()=597, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:31.956927] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.957274] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.957293] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.957304] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.957314] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.957323] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751957322, replica_locations:[]})
[2024-09-13 13:02:31.957366] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1283325, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:31.957691] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABA-0-0] [lt=34][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203751957254)
[2024-09-13 13:02:31.957724] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABA-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203751957254}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:31.957748] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:31.957760] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:31.957768] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203751957737)
[2024-09-13 13:02:31.959704] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.961320] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.974924] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.976293] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.992929] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.993541] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:31.993946] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.993963] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:31.993969] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:31.993978] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:31.993992] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751993991, replica_locations:[]})
[2024-09-13 13:02:31.994013] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:31.994036] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:31.994045] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:31.994082] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352)
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:31.994126] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559871243, cache_obj->added_lc()=false, cache_obj->get_object_id()=599, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:31.994465] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.995301] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:31.995626] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.995641] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:31.995647] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:31.995655] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:31.995665] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203751995664, replica_locations:[]}) [2024-09-13 13:02:31.995716] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=37000, remain_us=1244974, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.009199] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.009488] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.009515] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.009527] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.009539] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.009554] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752009553, replica_locations:[]}) [2024-09-13 13:02:32.009576] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.009606] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.009629] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=21][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.009659] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.009716] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559886832, cache_obj->added_lc()=false, cache_obj->get_object_id()=598, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.010683] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.010933] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.010967] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.010979] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.010990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.011003] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752011002, replica_locations:[]}) [2024-09-13 13:02:32.011065] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=259223, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:32.014966] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.016458] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.020357] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=30][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:32.027052] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.028642] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.032866] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] 
[lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.033303] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.033319] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.033367] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=47] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.033379] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.033393] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752033392, replica_locations:[]}) [2024-09-13 13:02:32.033405] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, 
tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.033426] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.033458] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=31][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.033481] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.033523] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559910641, cache_obj->added_lc()=false, cache_obj->get_object_id()=600, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.034605] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.034959] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.034976] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.034981] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.034991] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.035004] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752035004, replica_locations:[]}) [2024-09-13 13:02:32.035054] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=38000, remain_us=1205636, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.056121] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.057801] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, 
tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:32.057824] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752057794) [2024-09-13 13:02:32.057837] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203751857765, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:32.057866] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.057889] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.057898] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752057846) [2024-09-13 13:02:32.058124] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.062208] 
WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.064044] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.068260] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.068557] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.068582] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.068591] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.068602] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.068616] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752068615, replica_locations:[]}) [2024-09-13 13:02:32.068636] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.068665] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.068678] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.068706] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.068776] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559945890, cache_obj->added_lc()=false, cache_obj->get_object_id()=601, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.069847] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.070112] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.070133] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.070142] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.070153] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.070166] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752070165, replica_locations:[]}) [2024-09-13 13:02:32.070229] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1] will sleep(sleep_us=58000, remain_us=200060, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 
13:02:32.073277] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.074081] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.074105] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.074116] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.074133] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.074147] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752074147, replica_locations:[]}) [2024-09-13 13:02:32.074162] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] 
batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.074185] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.074199] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.074223] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.074263] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559951381, cache_obj->added_lc()=false, cache_obj->get_object_id()=602, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.075342] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.075828] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.075848] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.075859] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.075884] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.075901] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752075900, replica_locations:[]}) [2024-09-13 13:02:32.075948] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1164743, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.085238] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=33][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:32.087677] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] replace map num 
details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7) [2024-09-13 13:02:32.091515] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=23] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:32.093103] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=17] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:32.093434] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=20] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:32.093625] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=24] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:32.094353] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=19] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:32.094722] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=18] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:32.094742] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:32.094744] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=13] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) 
[2024-09-13 13:02:32.094900] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:32.095813] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=11] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:32.098870] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.098898] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.100318] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.100468] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.115144] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.115619] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.115646] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=25][errcode=-4018] fail to 
get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.115657] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.115670] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.115687] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752115686, replica_locations:[]}) [2024-09-13 13:02:32.115704] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.115967] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.115998] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=29][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.116034] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.116083] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6559993201, cache_obj->added_lc()=false, cache_obj->get_object_id()=604, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.117264] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.117586] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.117606] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.117612] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.117620] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.117630] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752117630, replica_locations:[]}) [2024-09-13 13:02:32.117682] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=40000, remain_us=1123009, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.119297] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=21] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:32.128464] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.128784] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.128805] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.128815] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, 
try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.128833] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.128847] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752128846, replica_locations:[]}) [2024-09-13 13:02:32.128866] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.128931] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.128943] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.128965] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.129028] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] 
[lt=27][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560006143, cache_obj->added_lc()=false, cache_obj->get_object_id()=603, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.130167] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.130399] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.130418] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.130428] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.130448] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.130461] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752130461, replica_locations:[]}) [2024-09-13 13:02:32.130513] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=139776, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:32.136025] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.136713] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC81-0-0] [lt=22][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:2892016032, pcode_:1193, hlen_:184, priority_:3, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203752136371, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035625, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203752049227}, chid_:0, clen_:30, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:32.136741] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC81-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:32.138015] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] 
[lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.141858] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.143556] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.149775] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=32] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1}) [2024-09-13 13:02:32.151321] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D9A3A431-0-0] [lt=16][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:32.157774] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABB-0-0] [lt=22][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752157364) [2024-09-13 
13:02:32.157805] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABB-0-0] [lt=29][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203752157364}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:32.157834] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.157848] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.157854] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752157816) [2024-09-13 13:02:32.157868] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.158282] 
WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.158302] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.158311] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.158325] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.158344] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752158343, replica_locations:[]}) [2024-09-13 13:02:32.158358] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.158423] WDIAG [SQL] do_close_plan 
(ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.158454] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=29][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.158509] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.158554] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560035672, cache_obj->added_lc()=false, cache_obj->get_object_id()=605, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.159907] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.160293] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.160313] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.160324] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.160338] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.160354] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752160353, replica_locations:[]}) [2024-09-13 13:02:32.160412] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=41000, remain_us=1080279, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.174629] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=22] PNIO [ratelimit] time: 1726203752174628, bytes: 4005531, bw: 0.187830 MB/s, add_ts: 1006945, add_bytes: 198322 [2024-09-13 13:02:32.174649] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.176310] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.186307] WDIAG [SERVER] 
fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.187919] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.189702] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.189924] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.189947] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.189958] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.189970] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.189983] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, 
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752189982, replica_locations:[]}) [2024-09-13 13:02:32.190004] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.190026] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.190036] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.190071] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.190114] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560067231, cache_obj->added_lc()=false, cache_obj->get_object_id()=606, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.191061] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.191235] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.191255] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.191271] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.191287] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.191304] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752191303, replica_locations:[]}) [2024-09-13 13:02:32.191370] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=60000, remain_us=78919, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:32.191487] INFO [STORAGE] gc_tables_in_queue 
(ob_tenant_meta_mem_mgr.cpp:565) [20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=71] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0) [2024-09-13 13:02:32.201582] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.201941] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.201956] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.201962] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.201972] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.201983] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752201982, replica_locations:[]}) [2024-09-13 13:02:32.201996] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.202015] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.202023] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.202040] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.202079] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560079196, cache_obj->added_lc()=false, cache_obj->get_object_id()=607, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.203044] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.203317] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.203330] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.203336] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.203346] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.203355] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752203355, replica_locations:[]}) [2024-09-13 13:02:32.203397] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=42000, remain_us=1037294, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.208508] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782E6-0-0] [lt=17][errcode=-4719] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:32.215837] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=52] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:32.227591] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=13] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:32.227631] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=22] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:32.229308] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=27] gc stale ls task succ [2024-09-13 13:02:32.229711] INFO [STORAGE] runTimerTask 
(ob_checkpoint_service.cpp:346) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=14] ====== check clog disk timer task ====== [2024-09-13 13:02:32.229733] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=20] get_disk_usage(ret=0, capacity(MB):=0, used(MB):=0) [2024-09-13 13:02:32.229746] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=7] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false) [2024-09-13 13:02:32.233986] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=65] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:32.235368] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=17] PNIO [ratelimit] time: 1726203752235367, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007484, add_bytes: 0 [2024-09-13 13:02:32.238534] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:32.238555] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:32.238562] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:32.238569] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:32.245840] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.245857] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.245864] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.245895] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=29] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.245915] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752245914, replica_locations:[]}) [2024-09-13 13:02:32.245935] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.245955] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:32.245971] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.245979] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.246000] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.246043] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560123161, cache_obj->added_lc()=false, cache_obj->get_object_id()=609, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.247936] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.247952] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.247963] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.247971] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.247981] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752247981, replica_locations:[]}) [2024-09-13 13:02:32.248028] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=992663, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.251822] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.251849] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.251859] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.251871] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.251899] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752251899, replica_locations:[]}) [2024-09-13 13:02:32.251914] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.251939] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.251950] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.251969] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.252008] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560129125, cache_obj->added_lc()=false, cache_obj->get_object_id()=608, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.252996] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.253022] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.253037] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.253048] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.253063] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752253062, replica_locations:[]}) [2024-09-13 13:02:32.253110] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0] will sleep(sleep_us=17178, remain_us=17178, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203752270288) [2024-09-13 13:02:32.257818] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=7][errcode=-4719] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:32.257900] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:32.257917] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752257894) [2024-09-13 13:02:32.257925] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203752057845, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:32.257942] WDIAG [STORAGE.TRANS] generate_min_weak_read_version 
(ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.257948] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.257952] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752257932) [2024-09-13 13:02:32.258288] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.258308] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.258314] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.258322] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.258373] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] 
[lt=17][errcode=0] server is initiating(server_id=0, local_seq=49, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:32.259381] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=14] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:32.259406] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=22][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:32.259414] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:32.259420] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=7][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:32.259428] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=6][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:32.259463] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=34][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:32.259469] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=3][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' 
doesn't exist [2024-09-13 13:02:32.259476] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:32.259480] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=3][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:32.259484] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:32.259489] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:32.259493] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=5][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:32.259498] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:32.259505] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=6][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:32.259514] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=6][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:32.259519] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=5][errcode=-5019] Failed to generate 
stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:32.259525] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=5][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:32.259532] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:32.259539] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=7][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:32.259544] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=5][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:32.259552] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:32.259563] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=8][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:32.259577] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 
13:02:32.259581] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=3][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:32.259585] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:32.259600] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:32.259609] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.259613] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=4][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:32.259621] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:32.259629] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:32.259634] WDIAG [SERVER] execute_read 
(ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:32.259639] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203752259261, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:32.259649] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=9][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:32.259654] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=3][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:32.259707] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=7][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:32.259718] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=10][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:32.259723] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=5][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:32.259729] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=5][errcode=-5019] ls table 
iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:32.259738] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=7][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:32.259746] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=7][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:32.259756] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8A-0-0] [lt=9][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:32.270371] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=16][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203752270288, ctx_timeout_ts=1726203752270288, worker_timeout_ts=1726203752270288, default_timeout=1000000) [2024-09-13 13:02:32.270398] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=26][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:32.270409] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:32.270426] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, 
is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.270458] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=30][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) [2024-09-13 13:02:32.270483] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.270493] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.270517] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.270566] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560147681, cache_obj->added_lc()=false, cache_obj->get_object_id()=611, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.271218] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203752270288, ctx_timeout_ts=1726203752270288, worker_timeout_ts=1726203752270288, default_timeout=1000000) [2024-09-13 
13:02:32.271253] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=34][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:32.271264] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=10][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:32.271276] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:32.271285] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.271299] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:32.271326] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:32.271343] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.271352] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.271379] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=8] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:32.271393] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=0][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:32.271406] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:32.271420] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.271429] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=8] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000318) [2024-09-13 13:02:32.271448] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) 
[19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=19][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:32.271458] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=8][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:32.271468] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:32.271476] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:32.271484] WDIAG [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=7][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1) [2024-09-13 13:02:32.271497] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:32.271528] 
WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560148649, cache_obj->added_lc()=false, cache_obj->get_object_id()=612, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.271576] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:32.271586] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:32.271598] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:32.271610] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=11][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:32.271626] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4601) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=14][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:32.271640] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) 
[19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1) [2024-09-13 13:02:32.271654] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=13] [REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, cost=2001368) [2024-09-13 13:02:32.271664] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=9][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1) [2024-09-13 13:02:32.271673] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=8] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2001396) [2024-09-13 13:02:32.271686] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=12][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1]) [2024-09-13 13:02:32.271698] INFO [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=11] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:32.271712] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C82-0-0] [lt=13][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:32.271726] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4012] fail to batch process task(ret=-4012) [2024-09-13 13:02:32.271737] WDIAG [SERVER] run1 
(ob_uniq_task_queue.h:456) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1) [2024-09-13 13:02:32.271775] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=15] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1]) [2024-09-13 13:02:32.271790] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=13] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1) [2024-09-13 13:02:32.273666] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.273693] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.273704] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.273715] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.273727] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752273726, replica_locations:[]}) [2024-09-13 13:02:32.273767] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1998036, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.274129] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.274151] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.274163] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752274163, replica_locations:[]}) [2024-09-13 13:02:32.274177] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.274196] WDIAG [SQL] 
do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.274210] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.274232] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.274259] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560151379, cache_obj->added_lc()=false, cache_obj->get_object_id()=613, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.275124] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.275149] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.275166] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752275166, replica_locations:[]}) [2024-09-13 13:02:32.275205] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=1000, remain_us=1996598, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.276126] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:32.276144] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.276152] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.276159] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.276172] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:32.276183] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] failed to renew master rootserver(ret=-4638, 
ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:32.276193] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638) [2024-09-13 13:02:32.276429] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.276451] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.276460] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752276460, replica_locations:[]}) [2024-09-13 13:02:32.276488] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0][errcode=-4638] fail to refresh core partition(tmp_ret=-4721) [2024-09-13 13:02:32.276504] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.276519] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:32.276531] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752276531, replica_locations:[]}) [2024-09-13 13:02:32.276545] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.276563] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.276572] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.276588] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.276615] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560153736, cache_obj->added_lc()=false, cache_obj->get_object_id()=614, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 
0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.276711] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000) [2024-09-13 13:02:32.276723] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] [2024-09-13 13:02:32.277011] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.277023] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.277029] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.277040] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:32.277050] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:32.277058] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0) [2024-09-13 13:02:32.277249] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.277259] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.277264] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.277273] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:32.277285] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:32.277289] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1) [2024-09-13 13:02:32.277518] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.277527] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.277532] WDIAG [SHARE] 
renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.277540] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:32.277547] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:32.277550] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2) [2024-09-13 13:02:32.277555] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638) [2024-09-13 13:02:32.277564] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:32.277571] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2) [2024-09-13 13:02:32.277873] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.277905] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=31] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.277918] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752277917, replica_locations:[]}) [2024-09-13 13:02:32.277955] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1993849, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.280265] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.280286] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.280298] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752280298, replica_locations:[]}) [2024-09-13 
13:02:32.280318] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.280344] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.280364] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.280384] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.280427] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560157548, cache_obj->added_lc()=false, cache_obj->get_object_id()=615, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.281256] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.281280] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.281292] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752281292, replica_locations:[]}) [2024-09-13 13:02:32.281328] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1990475, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.284677] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.284700] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.284713] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752284712, replica_locations:[]}) [2024-09-13 13:02:32.284732] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.284750] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.284759] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.284779] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.284805] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560161926, cache_obj->added_lc()=false, cache_obj->get_object_id()=616, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.285605] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.285630] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] 
[lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.285642] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752285641, replica_locations:[]}) [2024-09-13 13:02:32.285681] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1986122, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.287738] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6) [2024-09-13 13:02:32.290056] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.290085] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=28] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.290098] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752290097, replica_locations:[]}) [2024-09-13 13:02:32.290111] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.290130] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.290139] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.290162] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.290189] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560167310, cache_obj->added_lc()=false, cache_obj->get_object_id()=617, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.291150] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, 
ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.291176] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.291189] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752291188, replica_locations:[]}) [2024-09-13 13:02:32.291232] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1980572, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.291333] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.291353] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.291370] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203752291369, replica_locations:[]}) [2024-09-13 13:02:32.291390] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.291413] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.291425] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.291469] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.291506] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560168625, cache_obj->added_lc()=false, cache_obj->get_object_id()=610, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.291837] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=21] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 
13:02:32.292520] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.292541] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.292557] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752292556, replica_locations:[]}) [2024-09-13 13:02:32.292606] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=44000, remain_us=948085, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.296542] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.296566] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.296579] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752296579, replica_locations:[]}) [2024-09-13 13:02:32.296594] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.296612] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.296622] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.296645] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.296672] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560173793, cache_obj->added_lc()=false, cache_obj->get_object_id()=618, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:32.297468] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.297492] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.297508] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752297508, replica_locations:[]}) [2024-09-13 13:02:32.297568] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1974235, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.303939] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.303970] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=29] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.303987] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] 
[lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752303986, replica_locations:[]}) [2024-09-13 13:02:32.304002] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.304020] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.304030] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.304057] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.304084] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560181205, cache_obj->added_lc()=false, cache_obj->get_object_id()=620, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.304917] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.304941] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.304954] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752304953, replica_locations:[]}) [2024-09-13 13:02:32.304990] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1966813, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.312314] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.312338] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.312351] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752312351, replica_locations:[]}) [2024-09-13 13:02:32.312370] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.312389] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.312399] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.312416] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.312462] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560189583, cache_obj->added_lc()=false, cache_obj->get_object_id()=621, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.313381] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.313406] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.313418] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752313417, replica_locations:[]}) [2024-09-13 13:02:32.313464] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1958339, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.321868] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.321945] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=75] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.321961] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752321960, replica_locations:[]}) [2024-09-13 13:02:32.321976] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.321995] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.322013] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.322032] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.322060] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560199180, cache_obj->added_lc()=false, cache_obj->get_object_id()=622, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.322909] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.322933] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.322946] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752322945, replica_locations:[]}) [2024-09-13 13:02:32.322988] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1948816, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.332404] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.332433] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=27] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.332457] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752332457, replica_locations:[]}) [2024-09-13 13:02:32.332473] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.332493] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.332503] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.332527] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.332559] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560209679, cache_obj->added_lc()=false, cache_obj->get_object_id()=623, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.333209] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] 
[lt=22][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4018, dropped:52, tid:20197}]) [2024-09-13 13:02:32.333565] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.333591] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.333602] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.333613] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.333625] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752333624, replica_locations:[]}) [2024-09-13 13:02:32.333664] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] will sleep(sleep_us=10000, remain_us=1938139, base_sleep_us=1000, retry_sleep_type=1, 
v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.336969] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.336985] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.336991] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.336999] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.337021] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752337020, replica_locations:[]}) [2024-09-13 13:02:32.337034] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], 
ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.337088] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.337097] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.337113] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.337147] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560214265, cache_obj->added_lc()=false, cache_obj->get_object_id()=619, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.338066] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.338089] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.338099] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader 
doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.338113] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.338154] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752338153, replica_locations:[]}) [2024-09-13 13:02:32.338194] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=45000, remain_us=902497, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.340089] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:32.340113] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=1] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:32.340108] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CC5-0-0] [lt=17][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, 
add_timestamp:1726203752340065}) [2024-09-13 13:02:32.344052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.344075] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.344085] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.344096] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.344108] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752344108, replica_locations:[]}) [2024-09-13 13:02:32.344127] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 
13:02:32.344145] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.344154] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.344171] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.344200] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560221321, cache_obj->added_lc()=false, cache_obj->get_object_id()=624, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.345107] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.345133] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.345143] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.345160] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.345172] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752345171, replica_locations:[]}) [2024-09-13 13:02:32.345213] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1926590, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.349124] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=23] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:32.356633] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.356661] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.356673] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.356697] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.356711] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752356710, replica_locations:[]}) [2024-09-13 13:02:32.356725] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.356745] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.356755] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.356778] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.356818] WDIAG 
[SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560233938, cache_obj->added_lc()=false, cache_obj->get_object_id()=626, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.357769] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.357795] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.357806] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.357817] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.357829] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752357828, replica_locations:[]}) [2024-09-13 13:02:32.357870] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1913934, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.357945] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABC-0-0] [lt=30][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752357512) [2024-09-13 13:02:32.357962] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:32.357967] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABC-0-0] [lt=21][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203752357512}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, 
valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:32.357987] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752357956) [2024-09-13 13:02:32.357995] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203752257932, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:32.358004] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:32.358058] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.358064] WDIAG [STORAGE.TRANS] generate_server_version 
(ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.358069] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752358013) [2024-09-13 13:02:32.358077] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.358081] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.358084] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752358075) [2024-09-13 13:02:32.364504] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4F-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:32.364522] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B4F-0-0] [lt=17][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203752364095], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:32.364559] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) 
[20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=16] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:32.364954] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDF-0-0] [lt=1][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:32.365538] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DDF-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:32.366989] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=44][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:32.370305] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=42][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.370337] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.370354] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.370370] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.370388] INFO 
[SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752370388, replica_locations:[]}) [2024-09-13 13:02:32.370411] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.370455] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.370476] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.370502] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.370544] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560247661, cache_obj->added_lc()=false, cache_obj->get_object_id()=627, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 
0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.371732] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.371770] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=36][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.371787] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.371803] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.371820] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752371819, replica_locations:[]}) [2024-09-13 13:02:32.371873] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=13000, remain_us=1899931, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.382936] WDIAG 
[SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=27][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:32.383675] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.383698] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.383708] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.383723] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.383740] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752383739, replica_locations:[]}) [2024-09-13 13:02:32.383756] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.383785] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.383797] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.383831] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.383899] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560261014, cache_obj->added_lc()=false, cache_obj->get_object_id()=625, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.385361] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.385392] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.385404] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.385454] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.385476] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752385475, replica_locations:[]}) [2024-09-13 13:02:32.385478] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.385492] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.385505] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.385518] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.385529] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752385528, replica_locations:[]}) [2024-09-13 13:02:32.385500] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.385571] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.385585] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.385587] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=46000, remain_us=855104, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.385622] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") 
[2024-09-13 13:02:32.385671] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560262788, cache_obj->added_lc()=false, cache_obj->get_object_id()=628, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.386674] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.386707] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.386722] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.386739] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.386757] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, 
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752386756, replica_locations:[]}) [2024-09-13 13:02:32.386802] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1885001, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.390711] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=29] ====== tenant freeze timer task ====== [2024-09-13 13:02:32.390739] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:32.401338] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.401374] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=35][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.401386] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.401399] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.401415] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752401414, replica_locations:[]}) [2024-09-13 13:02:32.401456] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=39] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.401480] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.401491] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.401514] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.401562] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560278678, cache_obj->added_lc()=false, cache_obj->get_object_id()=630, cache_obj->get_tenant_id()=1, 
lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.402852] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.402892] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.402910] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.402922] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.402935] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752402934, replica_locations:[]}) [2024-09-13 13:02:32.402988] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1868816, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.418505] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.418534] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.418545] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.418556] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.418570] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752418570, replica_locations:[]}) [2024-09-13 13:02:32.418585] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] 
[TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.418609] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.418620] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.418649] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.418697] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560295814, cache_obj->added_lc()=false, cache_obj->get_object_id()=631, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.419896] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.419925] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, 
replica count=0) [2024-09-13 13:02:32.419941] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.419957] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.419976] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752419975, replica_locations:[]}) [2024-09-13 13:02:32.420036] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1851768, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.433225] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.433249] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.433262] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.433270] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.433285] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752433284, replica_locations:[]}) [2024-09-13 13:02:32.433299] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.433318] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:46, local_retry_times:46, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:32.433335] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.433344] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.433355] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:32.433356] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=18][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:2051, tid:20197}]) [2024-09-13 13:02:32.433378] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=22][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:32.433385] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:32.433401] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:32.433415] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.433487] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560310604, cache_obj->added_lc()=false, cache_obj->get_object_id()=629, 
cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.434495] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.434516] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:32.434901] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.434917] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.434923] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.434929] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.434938] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752434937, replica_locations:[]}) [2024-09-13 13:02:32.434956] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.434965] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:32.434974] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.434985] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:32.434994] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:32.435001] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:32.435014] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:32.435024] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:32.435030] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:32.435039] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:32.435046] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:32.435051] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:32.435056] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:32.435064] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:32.435070] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:32.435077] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:32.435081] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:32.435088] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:32.435095] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:32.435105] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] 
[lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:32.435113] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:32.435121] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:32.435131] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:32.435141] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:32.435148] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=47, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:32.435164] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] will sleep(sleep_us=47000, remain_us=805526, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.436490] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.436526] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=36][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.436537] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.436554] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.436567] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752436566, replica_locations:[]}) [2024-09-13 13:02:32.436582] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.436604] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, 
stmt_retry_times:16, local_retry_times:16, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:32.436620] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.436629] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.436639] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:32.436648] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:32.436655] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:32.436698] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=36][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:32.436718] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.436756] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6560313874, cache_obj->added_lc()=false, cache_obj->get_object_id()=632, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.437410] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.437456] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=45][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:32.437867] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.437899] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.437909] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.437920] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.437932] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752437931, replica_locations:[]}) [2024-09-13 13:02:32.437946] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.437960] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:32.437970] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.437986] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4721] failed to get location(ls_id={id:1}, 
ret=-4721) [2024-09-13 13:02:32.437996] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:32.438004] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:32.438018] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:32.438028] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:32.438037] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:32.438046] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:32.438054] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:32.438064] WDIAG [SQL.JO] 
generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:32.438073] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:32.438082] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:32.438090] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:32.438098] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:32.438105] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:32.438113] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:32.438121] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:32.438135] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:32.438145] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:32.438154] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:32.438161] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:32.438174] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:32.438182] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=17, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:32.438198] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] will sleep(sleep_us=17000, remain_us=1833606, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.455791] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.455821] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.455832] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.455844] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.455857] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752455856, replica_locations:[]}) [2024-09-13 13:02:32.455872] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.455900] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:17, local_retry_times:17, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:32.455924] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.455934] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.455945] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:32.455953] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:32.455961] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:32.455978] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:32.455989] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.456026] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560333144, cache_obj->added_lc()=false, cache_obj->get_object_id()=634, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.456761] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.456794] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=32][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:32.457228] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.457257] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.457268] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.457279] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.457296] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752457296, replica_locations:[]}) [2024-09-13 13:02:32.457310] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.457322] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:32.457332] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.457345] WDIAG [SQL.DAS] block_renew_tablet_location 
(ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:32.457367] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:32.457383] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:32.457401] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:32.457416] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:32.457425] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:32.457434] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:32.457459] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=25][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:32.457467] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:32.457480] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:32.457490] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:32.457499] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:32.457506] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:32.457514] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:32.457525] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:32.457533] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:32.457547] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:32.457557] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:32.457565] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:32.457573] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:32.457582] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:32.457591] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=18, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:32.457609] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] will sleep(sleep_us=18000, remain_us=1814194, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.458010] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABD-0-0] [lt=24][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752457580) [2024-09-13 13:02:32.458066] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:32.458081] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752458060) [2024-09-13 13:02:32.458089] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203752358001, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:32.458036] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABD-0-0] 
[lt=20][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203752457580}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:32.458107] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.458113] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.458120] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752458096) [2024-09-13 13:02:32.476060] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.476078] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.476085] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.476093] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.476105] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752476104, replica_locations:[]}) [2024-09-13 13:02:32.476118] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.476133] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:18, local_retry_times:18, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:32.476149] WDIAG [SQL] do_close_plan 
(ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.476165] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.476172] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:32.476180] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:32.476184] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:32.476197] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:32.476207] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.476245] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560353362, cache_obj->added_lc()=false, cache_obj->get_object_id()=635, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 
0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.476997] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:32.477018] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:32.477368] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.477388] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.477394] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.477401] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.477412] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752477412, replica_locations:[]})
[2024-09-13 13:02:32.477422] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:32.477429] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:32.477448] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:32.477459] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:32.477478] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:32.477486] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:32.477502] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:32.477512] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:32.477517] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:32.477522] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:32.477530] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:32.477534] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:32.477542] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:32.477550] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:32.477555] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:32.477561] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:32.477565] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:32.477573] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:32.477580] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:32.477590] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:32.477596] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:32.477604] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:32.477608] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:32.477616] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:32.477620] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=19, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:32.477636] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] will sleep(sleep_us=19000, remain_us=1794167, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:32.483025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.483042] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.483048] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.483055] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.483067] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752483067, replica_locations:[]})
[2024-09-13 13:02:32.483086] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.483099] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:47, local_retry_times:47, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:32.483114] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.483122] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.483130] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:32.483137] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:32.483141] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:32.483158] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:32.483167] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.483214] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560360332, cache_obj->added_lc()=false, cache_obj->get_object_id()=633, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.484067] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:32.484089] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=21][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:32.484886] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.484900] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.484905] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.484916] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.484925] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752484924, replica_locations:[]})
[2024-09-13 13:02:32.484938] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:32.484947] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:32.484956] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:32.484971] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:32.484979] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:32.484984] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:32.484993] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:32.485001] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:32.485006] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:32.485014] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:32.485018] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:32.485025] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:32.485032] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:32.485042] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:32.485046] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:32.485053] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:32.485056] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:32.485063] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:32.485067] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=3][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:32.485075] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:32.485083] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:32.485090] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:32.485094] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:32.485101] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:32.485109] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=48, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:32.485129] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] will sleep(sleep_us=48000, remain_us=755562, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:32.487812] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5)
[2024-09-13 13:02:32.492184] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:32.497098] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.497115] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.497121] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.497137] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.497147] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752497146, replica_locations:[]})
[2024-09-13 13:02:32.497159] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.497174] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:19, local_retry_times:19, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:32.497188] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.497196] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.497206] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:32.497210] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:32.497214] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:32.497226] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:32.497246] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.497283] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560374401, cache_obj->added_lc()=false, cache_obj->get_object_id()=636, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.497973] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:32.497993] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:32.498304] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.498327] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.498333] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.498340] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.498350] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752498350, replica_locations:[]})
[2024-09-13 13:02:32.498362] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:32.498376] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:32.498385] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:32.498396] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:32.498403] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:32.498409] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:32.498421] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:32.498430] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:32.498449] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:32.498456] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:32.498460] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:32.498466] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:32.498476] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:32.498482] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:32.498486] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:32.498489] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:32.498493] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:32.498497] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:32.498502] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:32.498509] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:32.498518] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:32.498525] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:32.498529] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:32.498537] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:32.498546] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=20, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:32.498561] INFO
[SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] will sleep(sleep_us=20000, remain_us=1773243, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.519025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.519044] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.519051] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.519058] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.519067] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752519066, replica_locations:[]}) [2024-09-13 13:02:32.519082] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.519098] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:20, local_retry_times:20, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:32.519118] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.519126] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.519136] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:32.519144] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:32.519148] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:32.519159] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, 
column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:32.519168] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.519203] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560396321, cache_obj->added_lc()=false, cache_obj->get_object_id()=638, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.519988] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.520009] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:32.520374] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.520391] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4018] 
fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.520397] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.520406] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.520414] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752520413, replica_locations:[]}) [2024-09-13 13:02:32.520423] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:32.520430] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:32.520475] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1751328, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, 
timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.533481] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=17][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4719, dropped:107, tid:20300}]) [2024-09-13 13:02:32.533674] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.533704] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.533715] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.533726] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.533738] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752533738, replica_locations:[]}) [2024-09-13 13:02:32.533751] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.533774] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.533779] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.533797] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.533839] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560410956, cache_obj->added_lc()=false, cache_obj->get_object_id()=637, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.534836] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.535159] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:32.535175] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.535181] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.535188] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.535196] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752535196, replica_locations:[]}) [2024-09-13 13:02:32.535235] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=49000, remain_us=705456, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.541637] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.541964] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] 
[lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.541978] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.541984] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.541997] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.542007] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752542007, replica_locations:[]}) [2024-09-13 13:02:32.542020] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.542038] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is 
null(ret=-4006) [2024-09-13 13:02:32.542046] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.542066] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.542113] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560419232, cache_obj->added_lc()=false, cache_obj->get_object_id()=639, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.542819] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.543120] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.543135] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.543141] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.543153] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.543164] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752543163, replica_locations:[]}) [2024-09-13 13:02:32.543201] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1728603, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.558128] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:32.558150] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, 
valid_part_count=0, total_part_count=0, generate_timestamp=1726203752558122) [2024-09-13 13:02:32.558159] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203752458095, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:32.558179] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.558184] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.558188] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752558166) [2024-09-13 13:02:32.561796] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.563399] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.565336] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.565668] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.565682] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.565688] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.565694] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.565706] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752565705, replica_locations:[]}) [2024-09-13 13:02:32.565716] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.565734] 
WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.565746] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.565764] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.565795] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560442914, cache_obj->added_lc()=false, cache_obj->get_object_id()=641, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.566501] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.566914] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.566928] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, 
replica count=0) [2024-09-13 13:02:32.566934] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.566945] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.566952] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752566952, replica_locations:[]}) [2024-09-13 13:02:32.566990] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1704814, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.574511] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.576571] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.584417] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:32.584677] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.584694] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.584701] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.584713] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.584723] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752584723, replica_locations:[]})
[2024-09-13 13:02:32.584737] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.584757] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.584765] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.584787] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.584825] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560461944, cache_obj->added_lc()=false, cache_obj->get_object_id()=640, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.585739] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.585935] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.585951] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.585957] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.585968] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.585977] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752585977, replica_locations:[]})
[2024-09-13 13:02:32.586017] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=50000, remain_us=654674, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:32.590145] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.590544] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.590565] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.590571] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.590579] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.590591] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752590591, replica_locations:[]})
[2024-09-13 13:02:32.590604] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.590623] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.590631] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.590648] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.590692] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560467810, cache_obj->added_lc()=false, cache_obj->get_object_id()=642, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.591408] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.591758] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.591777] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.591783] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.591791] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.591802] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752591802, replica_locations:[]})
[2024-09-13 13:02:32.591839] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=24000, remain_us=1679964, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:32.609936] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.611600] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.615996] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.616378] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.616394] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.616400] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.616410] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.616420] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752616419, replica_locations:[]})
[2024-09-13 13:02:32.616447] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.616465] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.616473] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.616494] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.616529] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560493647, cache_obj->added_lc()=false, cache_obj->get_object_id()=644, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.617350] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.617748] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.617762] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.617768] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.617777] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.617786] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752617786, replica_locations:[]})
[2024-09-13 13:02:32.617831] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] will sleep(sleep_us=25000, remain_us=1653972, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:32.624579] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=39] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:32.628236] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.629714] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.636199] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.636479] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.636496] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.636503] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.636510] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.636523] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752636523, replica_locations:[]})
[2024-09-13 13:02:32.636534] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.636554] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.636570] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.636589] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.636627] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560513745, cache_obj->added_lc()=false, cache_obj->get_object_id()=643, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.637804] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.638004] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.638021] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.638027] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.638035] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.638047] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752638046, replica_locations:[]})
[2024-09-13 13:02:32.638088] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=602602, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:32.642978] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.643420] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.643454] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.643460] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.643469] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.643478] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752643477, replica_locations:[]})
[2024-09-13 13:02:32.643488] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.643506] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.643514] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.643531] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.644258] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.644577] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.644590] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.644600] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.644610] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.644621] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752644620, replica_locations:[]})
[2024-09-13 13:02:32.644659] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1627145, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:32.658190] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABE-0-0] [lt=72][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752657727)
[2024-09-13 13:02:32.658195] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:32.658213] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752658189)
[2024-09-13 13:02:32.658225] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203752558165, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:32.658211] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABE-0-0] [lt=20][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203752657727}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:32.658251] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:32.658259] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:32.658264] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752658237)
[2024-09-13 13:02:32.658276] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:32.658284] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:32.658287] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752658273)
[2024-09-13 13:02:32.659224] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.660833] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.670863] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.671288] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.671310] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.671324] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.671347] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.671367] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752671366, replica_locations:[]})
[2024-09-13 13:02:32.671388] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.671416] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.672586] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=52][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.672942] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.672974] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.672980] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.672987] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.672995] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752672995, replica_locations:[]})
[2024-09-13 13:02:32.673045] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1598759, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:32.682301] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.684045] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.687898] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=16] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4)
[2024-09-13 13:02:32.689333] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.689862] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.689902] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.689909] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] leader doesn't exist, try use
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.689917] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.689929] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752689928, replica_locations:[]}) [2024-09-13 13:02:32.689944] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.689967] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.691064] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.691285] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.691304] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.691310] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.691318] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.691328] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752691327, replica_locations:[]}) [2024-09-13 13:02:32.691408] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=52000, remain_us=549283, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.692571] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=35] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:32.700233] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.700607] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.700625] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.700636] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.700646] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.700660] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752700659, replica_locations:[]}) [2024-09-13 13:02:32.700674] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.700694] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.701641] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.702031] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.702052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.702062] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.702079] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.702091] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752702091, replica_locations:[]}) [2024-09-13 13:02:32.702139] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] will sleep(sleep_us=28000, remain_us=1569665, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.709588] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.711298] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.727677] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=12] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:32.727709] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=15] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=1, size_used=0, mem_used=16637952) [2024-09-13 13:02:32.730317] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.730798] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.730818] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.730829] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.730840] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.730854] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752730854, replica_locations:[]}) [2024-09-13 13:02:32.730884] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.730910] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.731841] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.732223] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.732245] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.732255] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.732266] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.732278] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752732277, replica_locations:[]}) [2024-09-13 13:02:32.732322] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1539481, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.733745] WDIAG [SHARE] refresh 
(ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=16][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:0, dropped:13, tid:20197}]) [2024-09-13 13:02:32.737698] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.739322] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.743587] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.743863] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.743891] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.743898] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.743908] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:32.743919] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752743919, replica_locations:[]}) [2024-09-13 13:02:32.743933] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.743955] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.743964] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.743983] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.744029] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560621148, cache_obj->added_lc()=false, cache_obj->get_object_id()=649, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 
0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.745008] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.745256] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.745274] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.745280] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.745289] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.745301] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752745301, replica_locations:[]}) [2024-09-13 13:02:32.745341] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=53000, remain_us=495349, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.758409] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:32.758431] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752758401) [2024-09-13 13:02:32.758452] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203752658234, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:32.758473] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.758478] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 
13:02:32.758483] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752758459) [2024-09-13 13:02:32.761178] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.761494] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.762308] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.762337] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.762355] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.762367] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.762381] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752762380, replica_locations:[]}) [2024-09-13 13:02:32.762407] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.762434] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.762452] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.762475] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.762517] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560639634, cache_obj->added_lc()=false, cache_obj->get_object_id()=651, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:32.763011] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.763516] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.763892] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.763917] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.763932] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.763944] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.763957] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752763956, 
replica_locations:[]}) [2024-09-13 13:02:32.764005] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1507799, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.773239] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=25][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:32.793913] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.794156] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.794521] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.794547] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.794564] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.794581] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.794599] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752794599, replica_locations:[]}) [2024-09-13 13:02:32.794622] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.794657] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.794672] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.794707] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.794757] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560671872, cache_obj->added_lc()=false, cache_obj->get_object_id()=653, 
cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.795509] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.795820] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.796202] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.796232] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.796249] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.796265] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.796282] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752796282, replica_locations:[]}) [2024-09-13 13:02:32.796329] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=31000, remain_us=1475475, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.798526] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.798783] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.798801] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.798816] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.798827] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.798841] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752798840, replica_locations:[]}) [2024-09-13 13:02:32.798861] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.798929] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.798940] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.798970] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.799018] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560676135, cache_obj->added_lc()=false, cache_obj->get_object_id()=652, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 
0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.800184] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.800456] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.800477] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.800533] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=55] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.800549] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.800563] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752800563, replica_locations:[]}) [2024-09-13 13:02:32.800638] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1] will sleep(sleep_us=54000, remain_us=440053, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.814003] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.815716] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.827529] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.828171] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.828201] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.828213] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.828224] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.828239] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752828238, replica_locations:[]}) [2024-09-13 13:02:32.828255] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.828281] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.828296] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.828323] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.828366] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560705483, cache_obj->added_lc()=false, cache_obj->get_object_id()=654, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 
0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.829325] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.829742] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.829763] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.829774] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.829790] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.829803] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752829802, replica_locations:[]}) [2024-09-13 13:02:32.829867] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1441937, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.840545] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:32.840588] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:32.851034] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.852669] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.854827] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.855064] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.855082] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.855089] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.855099] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.855112] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752855111, replica_locations:[]}) [2024-09-13 13:02:32.855125] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.855154] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.855161] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.855181] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 
13:02:32.855221] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560732339, cache_obj->added_lc()=false, cache_obj->get_object_id()=655, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.856310] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.856531] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.856553] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.856560] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.856568] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.856578] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752856577, replica_locations:[]}) [2024-09-13 13:02:32.856627] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=55000, remain_us=384064, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:32.858312] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABF-0-0] [lt=46][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752857884) [2024-09-13 13:02:32.858339] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ABF-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203752857884}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, 
cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:32.858376] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.858391] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:32.858400] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752858361) [2024-09-13 13:02:32.862079] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.862508] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=35][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.862536] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.862551] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.862569] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.862588] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752862587, replica_locations:[]}) [2024-09-13 13:02:32.862611] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:32.862639] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:32.862654] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:32.862693] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:32.862748] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6560739862, cache_obj->added_lc()=false, cache_obj->get_object_id()=656, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:32.863948] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:32.864343] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.864370] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:32.864386] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:32.864403] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:32.864422] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752864421, replica_locations:[]}) [2024-09-13 13:02:32.864500] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=33000, remain_us=1407303, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:32.864960] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B50-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:32.864977] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B50-0-0] [lt=16][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203752864563], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:32.865430] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE0-0-0] [lt=14][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203752865025, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035643, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203752864204}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:32.865472] WDIAG [RPC.FRAME] run 
(ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE0-0-0] [lt=41][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:32.865984] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE0-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:32.867554] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.869170] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.872251] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:32.873057] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:32.873509] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=13] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:32.887979] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3)
[2024-09-13 13:02:32.892955] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=47] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:32.897653] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.898107] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.898133] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.898144] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.898156] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.898171] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752898170, replica_locations:[]})
[2024-09-13 13:02:32.898191] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.898214] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.898224] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.898247] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.898289] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560775406, cache_obj->added_lc()=false, cache_obj->get_object_id()=658, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.899191] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.899530] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.899553] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.899563] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.899574] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.899586] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752899586, replica_locations:[]})
[2024-09-13 13:02:32.899633] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] will sleep(sleep_us=34000, remain_us=1372171, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:32.909369] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.911488] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.911800] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.912169] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.912188] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.912197] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.912214] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.912229] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752912228, replica_locations:[]})
[2024-09-13 13:02:32.912249] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.912275] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.912287] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.912318] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.912369] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560789486, cache_obj->added_lc()=false, cache_obj->get_object_id()=657, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.913552] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.913893] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.913914] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.913923] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.913932] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.913947] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752913946, replica_locations:[]})
[2024-09-13 13:02:32.914004] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=56000, remain_us=326687, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:32.921970] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.923771] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.933798] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.934231] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.934248] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.934254] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.934262] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.934272] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752934271, replica_locations:[]})
[2024-09-13 13:02:32.934293] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.934314] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.934323] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.934343] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.934385] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560811501, cache_obj->added_lc()=false, cache_obj->get_object_id()=659, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.935297] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.935685] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.935705] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.935712] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.935719] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.935727] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752935726, replica_locations:[]})
[2024-09-13 13:02:32.935780] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=35000, remain_us=1336023, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:32.958405] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC0-0-0] [lt=26][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752957961)
[2024-09-13 13:02:32.958423] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1)
[2024-09-13 13:02:32.958444] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:32.958445] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC0-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203752957961}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:32.958467] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203752958418)
[2024-09-13 13:02:32.958481] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203752758459, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:32.958500] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:32.958509] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:32.958514] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752958489)
[2024-09-13 13:02:32.958527] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:32.958531] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:32.958534] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203752958524)
[2024-09-13 13:02:32.969679] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.970170] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.970419] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.970444] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.970450] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.970459] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.970482] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752970481, replica_locations:[]})
[2024-09-13 13:02:32.970495] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.970515] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.970524] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.970542] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.970589] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560847707, cache_obj->added_lc()=false, cache_obj->get_object_id()=660, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.970948] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.971332] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.971314] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.971366] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=51][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.971372] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.971379] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.971391] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752971390, replica_locations:[]})
[2024-09-13 13:02:32.971411] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:32.971429] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:32.971443] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:32.971458] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:32.971507] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560848625, cache_obj->added_lc()=false, cache_obj->get_object_id()=661, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:32.971719] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.971911] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.971928] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.971934] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.971943] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.971960] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752971959, replica_locations:[]})
[2024-09-13 13:02:32.972003] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=268687, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:32.972394] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.972724] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.972739] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:32.972745] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:32.972752] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:32.972761] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203752972760, replica_locations:[]})
[2024-09-13 13:02:32.972806] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1298998, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:32.977606] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:32.979220] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.009015] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.009582] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.009603] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.009610] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.009617] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.009631] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753009630, replica_locations:[]})
[2024-09-13 13:02:33.009655] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.009677] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.009683] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.009706] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.009754] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560886870, cache_obj->added_lc()=false, cache_obj->get_object_id()=663, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.010797] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.011190] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST",
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.011208] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.011214] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.011224] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.011234] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753011233, replica_locations:[]}) [2024-09-13 13:02:33.011283] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] will sleep(sleep_us=37000, remain_us=1260520, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.029211] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.029602] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.029623] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.029637] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.029652] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.029667] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753029667, replica_locations:[]}) [2024-09-13 13:02:33.029681] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.029715] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.029724] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.029752] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.029797] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560906915, cache_obj->added_lc()=false, cache_obj->get_object_id()=662, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.029974] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.031125] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.031325] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.031342] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] 
[lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.031349] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.031359] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.031372] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753031371, replica_locations:[]}) [2024-09-13 13:02:33.031420] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=209271, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:33.031561] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.033811] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.035364] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.048454] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.048901] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.048924] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.048937] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.048954] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.048967] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753048966, replica_locations:[]}) [2024-09-13 13:02:33.048980] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.049002] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.049011] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.049036] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.049076] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560926194, cache_obj->added_lc()=false, cache_obj->get_object_id()=664, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.049733] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1921) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=6] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1) [2024-09-13 13:02:33.049752] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1462) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=16] 
start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=161061270, cache_obj_num=1, cache_node_num=1) [2024-09-13 13:02:33.049763] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1479) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=9] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=161061270, cache_obj_num=1, cache_node_num=1) [2024-09-13 13:02:33.049773] INFO [SQL.PC] runTimerTask (ob_plan_cache.cpp:2678) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=9] schedule next cache evict task(evict_interval=5000000) [2024-09-13 13:02:33.049968] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.050310] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.050330] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.050343] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.050364] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.050380] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753050379, replica_locations:[]}) [2024-09-13 13:02:33.050428] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] will sleep(sleep_us=38000, remain_us=1221376, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.052227] INFO [SQL.PC] dump_all_objs (ob_plan_cache.cpp:2397) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=9] Dumping All Cache Objs(alloc_obj_list.count()=3, alloc_obj_list=[{obj_id:206, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:2, added_to_lc:true, mem_used:157887}, {obj_id:665, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:1, added_to_lc:false, mem_used:23272}, {obj_id:666, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:1, added_to_lc:false, mem_used:23272}]) [2024-09-13 13:02:33.052253] INFO [SQL.PC] runTimerTask (ob_plan_cache.cpp:2686) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=24] schedule next cache evict task(evict_interval=5000000) [2024-09-13 13:02:33.058621] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) 
[2024-09-13 13:02:33.058651] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753058613) [2024-09-13 13:02:33.058661] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203752958487, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:33.058682] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.058689] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.058694] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753058668) [2024-09-13 13:02:33.088056] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=27] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2) [2024-09-13 13:02:33.088689] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.089229] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.089259] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.089271] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.089284] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.089301] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753089300, replica_locations:[]}) [2024-09-13 13:02:33.089317] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, 
ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.089365] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.089388] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.089420] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.089513] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=44][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560966627, cache_obj->added_lc()=false, cache_obj->get_object_id()=666, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.089632] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.089933] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.089953] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.089969] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.089979] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.089993] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753089992, replica_locations:[]}) [2024-09-13 13:02:33.090007] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.090024] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.090032] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) 
[2024-09-13 13:02:33.090068] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.090110] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6560967229, cache_obj->added_lc()=false, cache_obj->get_object_id()=665, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.090858] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.090950] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.091262] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.091284] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.091226] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.091335] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=108][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.091342] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.091349] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.091360] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753091359, replica_locations:[]})
[2024-09-13 13:02:33.091419] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1180385, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:33.091509] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.091525] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.091539] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.091548] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.091556] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753091555, replica_locations:[]})
[2024-09-13 13:02:33.091598] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=149093, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:33.092549] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.093201] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=26] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:33.093242] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.093310] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:33.093721] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:33.094417] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=19] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:33.094485] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=10] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:33.094613] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:33.094703] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=6] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:33.094892] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=33] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:33.095681] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=18] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:33.095760] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=11] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:33.119381] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=15] swc wakeup.(stat_period_=1000000, ready=false)
[2024-09-13 13:02:33.123742] INFO [SQL.QRR] runTimerTask (ob_udr_mgr.cpp:92) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=7] run rewrite rule refresh task(rule_mgr_->tenant_id_=1)
[2024-09-13 13:02:33.123777] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=19][errcode=0] server is initiating(server_id=0, local_seq=50, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:33.124019] INFO [PALF] log_loop_ (log_loop_thread.cpp:155) [20122][T1_LogLoop][T1][Y0-0000000000000000-0-0] [lt=22] LogLoopThread round_cost_time(us)(round_cost_time=2)
[2024-09-13 13:02:33.124728] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_sys_stat, table_name.ptr()="data_size:14, data:5F5F616C6C5F7379735F73746174", ret=-5019)
[2024-09-13 13:02:33.124752] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=23][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_sys_stat, ret=-5019)
[2024-09-13 13:02:33.124761] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_sys_stat, db_name=oceanbase)
[2024-09-13 13:02:33.124768] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_sys_stat)
[2024-09-13 13:02:33.124774] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=4][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:33.124782] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:33.124788] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_sys_stat' doesn't exist
[2024-09-13 13:02:33.124796] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:33.124800] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=3][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:33.124805] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:33.124809] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:33.124816] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=7][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:33.124820] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:33.124825] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=5][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:33.124835] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:33.124842] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:33.124848] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=5][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:33.124855] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=7][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:33.124860] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE, ret=-5019)
[2024-09-13 13:02:33.124865] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=5][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:33.124870] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:33.124894] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=22][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:33.124908] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:33.124915] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=7][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:33.124919] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:33.124930] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:33.124938] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.124943] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C7F-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:33.124951] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE)
[2024-09-13 13:02:33.124957] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:33.124965] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:33.124971] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203753124584, sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE)
[2024-09-13 13:02:33.124980] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:33.124984] WDIAG [SHARE] fetch_max_id (ob_max_id_fetcher.cpp:482) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-5019] execute sql failed(sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE, ret=-5019)
[2024-09-13 13:02:33.125036] WDIAG [SQL.QRR] fetch_max_rule_version (ob_udr_sql_service.cpp:141) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] failed to fetch max rule version(ret=-5019, tenant_id=1)
[2024-09-13 13:02:33.125047] WDIAG [SQL.QRR] sync_rule_from_inner_table (ob_udr_mgr.cpp:251) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] failed to fetch max rule version(ret=-5019)
[2024-09-13 13:02:33.125052] WDIAG [SQL.QRR] runTimerTask (ob_udr_mgr.cpp:94) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] failed to sync rule from inner table(ret=-5019)
[2024-09-13 13:02:33.130610] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.131094] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.131114] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.131121] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.131133] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.131146] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753131146, replica_locations:[]})
[2024-09-13 13:02:33.131162] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.131196] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.131205] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.131233] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.131282] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561008396, cache_obj->added_lc()=false, cache_obj->get_object_id()=667, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.132240] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.132701] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.132717] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.132723] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.132731] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.132739] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753132739, replica_locations:[]})
[2024-09-13 13:02:33.132786] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] will sleep(sleep_us=40000, remain_us=1139018, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:33.137409] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC82-0-0] [lt=5][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.149153] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.150812] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.150827] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.151103] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.151124] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.151131] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.151138] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.151168] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=23] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753151167, replica_locations:[]})
[2024-09-13 13:02:33.151183] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.151206] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.151215] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.151236] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.151278] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561028396, cache_obj->added_lc()=false, cache_obj->get_object_id()=668, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.152359] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.152786] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.152807] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.152814] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.152821] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.152838] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753152837, replica_locations:[]})
[2024-09-13 13:02:33.152910] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=60000, remain_us=87780, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203753240690)
[2024-09-13 13:02:33.154024] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.155575] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782DE-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.158523] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC1-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753158075)
[2024-09-13 13:02:33.158552] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC1-0-0] [lt=19][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203753158075}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:33.158611] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.158626] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.158632] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753158598)
[2024-09-13 13:02:33.159013] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB220B-0-0] [lt=24][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.159737] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB220F-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.160004] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2210-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.160461] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2214-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.160671] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2215-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.161128] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2219-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.161340] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB221A-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.161912] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB221E-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.162136] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB221F-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.162504] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2223-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:33.172969] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.173431] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.173455] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.173462] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.173471] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.173487] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753173486, replica_locations:[]})
[2024-09-13 13:02:33.173501] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.173524] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.173533] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.173553] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.173600] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561050716, cache_obj->added_lc()=false, cache_obj->get_object_id()=669, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.174680] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.174718] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=24] PNIO [ratelimit] time: 1726203753174717, bytes: 4199552, bw: 0.185016 MB/s, add_ts: 1000089, add_bytes: 194021
[2024-09-13 13:02:33.175041] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.175062] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.175072] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.175084] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.175095] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753175095, replica_locations:[]})
[2024-09-13 13:02:33.175143] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=41000, remain_us=1096660, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:33.208463] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.208907] INFO [MDS] for_each_ls_in_tenant (mds_tenant_service.cpp:237) [20107][T1_Occam][T1][YB42AC103323-000621F921A60C86-0-0] [lt=50] for each ls(succ_num=0, ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.208934] INFO [MDS] for_each_ls_in_tenant (mds_tenant_service.cpp:237) [20107][T1_Occam][T1][YB42AC103323-000621F921A60C88-0-0] [lt=19] for each ls(succ_num=0, ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.210028] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719,
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.210333] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782E7-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.213116] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.213516] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.213566] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=48][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.213575] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.213583] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.213597] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203753213596, replica_locations:[]}) [2024-09-13 13:02:33.213613] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.213638] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.213654] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.213677] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.213729] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561090846, cache_obj->added_lc()=false, cache_obj->get_object_id()=670, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.214847] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C82-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.215186] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] 
[lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.215204] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.215211] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.215221] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.215230] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753215229, replica_locations:[]}) [2024-09-13 13:02:33.215293] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0] will sleep(sleep_us=25398, remain_us=25398, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203753240690) [2024-09-13 13:02:33.216294] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=22][errcode=0] server is initiating(server_id=0, local_seq=51, max_local_seq=262143, 
max_server_id=4095) [2024-09-13 13:02:33.216405] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.216622] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.216637] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.216644] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.216660] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.216672] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753216671, replica_locations:[]}) [2024-09-13 13:02:33.216685] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] 
[lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.216705] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.216713] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.216734] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.216773] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561093890, cache_obj->added_lc()=false, cache_obj->get_object_id()=671, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.217246] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=15] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, table_name.ptr()="data_size:12, data:5F5F616C6C5F736572766572", ret=-5019) [2024-09-13 13:02:33.217274] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=26][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, 
table_name=__all_server, ret=-5019) [2024-09-13 13:02:33.217282] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_server, db_name=oceanbase) [2024-09-13 13:02:33.217289] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-09-13 13:02:33.217294] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=4][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:33.217299] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=5][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:33.217310] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=8][errcode=-5019] Table 'oceanbase.__all_server' doesn't exist [2024-09-13 13:02:33.217318] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:33.217325] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=7][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:33.217328] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=3][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:33.217335] WDIAG [SQL.RESV] 
resolve_from_clause (ob_select_resolver.cpp:3698) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=6][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:33.217343] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=8][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:33.217348] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:33.217354] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=6][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:33.217365] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=8][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:33.217372] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=7][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:33.217378] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:33.217382] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=3][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:33.217389] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=6][errcode=-5019] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.16.51.35' 
and svr_port=2882, ret=-5019) [2024-09-13 13:02:33.217397] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:33.217401] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:33.217414] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=10][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:33.217427] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:33.217431] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=4][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:33.217445] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=14][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:33.217461] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=11][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:33.217469] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.217477] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C7F-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-09-13 13:02:33.217484] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882) [2024-09-13 13:02:33.217492] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:33.217498] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:33.217506] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203753217160, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882) [2024-09-13 13:02:33.217516] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:33.217521] WDIAG get_my_sql_result_ (ob_table_access_helper.h:435) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x2b07c6c55878, table=__all_server, condition=where 
svr_ip='172.16.51.35' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882, columns_str="zone") [2024-09-13 13:02:33.217535] WDIAG read_and_convert_to_values_ (ob_table_access_helper.h:332) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-5019] fail to get ObMySQLResult(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, table=__all_server, condition=where svr_ip='172.16.51.35' and svr_port=2882) [2024-09-13 13:02:33.217585] WDIAG [COORDINATOR] get_self_zone_name (table_accessor.cpp:634) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] get zone from __all_server failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", columns=0x2b07c6c55878, where_condition="where svr_ip='172.16.51.35' and svr_port=2882", zone_name_holder=) [2024-09-13 13:02:33.217598] WDIAG [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:567) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-5019] get self zone name failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", all_ls_election_reference_info=[]) [2024-09-13 13:02:33.217607] WDIAG [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:576) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] zone name is empty(ret=-5019, ret="OB_TABLE_NOT_EXIST", all_ls_election_reference_info=[]) [2024-09-13 13:02:33.217613] WDIAG [COORDINATOR] refresh (ob_leader_coordinator.cpp:144) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] get all ls election reference info failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:33.217626] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:33.217796] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.218083] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.218097] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.218103] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.218118] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.218129] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753218129, replica_locations:[]}) [2024-09-13 13:02:33.218177] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] will sleep(sleep_us=42000, remain_us=1053627, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, 
v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.227034] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:305) [20249][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=9] ====== traversal_flush timer task ====== [2024-09-13 13:02:33.227058] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:338) [20249][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=20] no logstream(ret=0, ls_cnt=0) [2024-09-13 13:02:33.227194] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:130) [20248][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=8] ====== checkpoint timer task ====== [2024-09-13 13:02:33.227225] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:193) [20248][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=24] no logstream(ret=0, ls_cnt=0) [2024-09-13 13:02:33.227753] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=12] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:33.227790] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=21] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:33.228172] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:116) [20251][T1_TabletGC][T1][Y0-0000000000000000-0-0] [lt=6] ====== [tabletchange] timer task ======(GC_CHECK_INTERVAL=5000000) [2024-09-13 13:02:33.228194] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:242) [20251][T1_TabletGC][T1][Y0-0000000000000000-0-0] [lt=16] [tabletchange] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, times=3) [2024-09-13 13:02:33.229371] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=16] gc stale ls task succ [2024-09-13 13:02:33.229629] INFO [STORAGE] runTimerTask 
(ob_empty_shell_task.cpp:39) [20252][T1_TabletShell][T1][Y0-0000000000000000-0-0] [lt=13] ====== [emptytablet] empty shell timer task ======(GC_EMPTY_TABLET_SHELL_INTERVAL=5000000) [2024-09-13 13:02:33.229647] INFO [STORAGE] runTimerTask (ob_empty_shell_task.cpp:107) [20252][T1_TabletShell][T1][Y0-0000000000000000-0-0] [lt=14] [emptytablet] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, times=3) [2024-09-13 13:02:33.234073] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=16] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:33.235014] INFO [STORAGE.TRANS] dump_mapper_info (ob_lock_wait_mgr.h:66) [20231][T1_LockWaitMgr][T1][Y0-0000000000000000-0-0] [lt=18] report RowHolderMapper summary info(count=0, bkt_cnt=248) [2024-09-13 13:02:33.238701] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:33.238724] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:33.238731] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:33.238738] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:33.240354] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:104) [20264][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=5] tx gc loop thread is running(MTL_ID()=1) [2024-09-13 13:02:33.240372] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:111) [20264][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] 
[lt=17] try gc retain ctx [2024-09-13 13:02:33.240790] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203753240691, ctx_timeout_ts=1726203753240691, worker_timeout_ts=1726203753240690, default_timeout=1000000) [2024-09-13 13:02:33.240813] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=22][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:33.240820] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:33.240834] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.240846] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) [2024-09-13 13:02:33.240862] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.240871] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=0] fail close main 
query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.240922] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.240970] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561118087, cache_obj->added_lc()=false, cache_obj->get_object_id()=672, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.242056] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=1][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203753240690, ctx_timeout_ts=1726203753240690, worker_timeout_ts=1726203753240690, default_timeout=1000000) [2024-09-13 13:02:33.242075] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=18][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:33.242081] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=6][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:33.242089] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=8][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, 
tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:33.242095] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:33.242108] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:33.242135] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=0][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:33.242150] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.242155] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.242180] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:33.242193] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] 
[lt=1][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:33.242212] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:33.242223] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.242228] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=5] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000740) [2024-09-13 13:02:33.242235] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C82-0-0] [lt=7][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:33.242242] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:33.242247] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4012] 
retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:33.242252] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:33.242257] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] query failed(ret=-4012, conn=0x2b07a13e03a0, start=1726203751241477, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:33.242267] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4012] read failed(ret=-4012) [2024-09-13 13:02:33.242274] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:33.242313] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561119431, cache_obj->added_lc()=false, cache_obj->get_object_id()=674, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.242380] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:33.242394] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) 
[20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:33.242402] WDIAG [SHARE] get_snapshot_gc_scn (ob_global_stat_proxy.cpp:164) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:33.242413] WDIAG [STORAGE] get_global_info (ob_tenant_freeze_info_mgr.cpp:811) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4012] fail to get global info(ret=-4012, tenant_id=1) [2024-09-13 13:02:33.242422] WDIAG [STORAGE] try_update_info (ob_tenant_freeze_info_mgr.cpp:954) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] failed to get global info(ret=-4012) [2024-09-13 13:02:33.242429] WDIAG [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:1008) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4012] fail to try update info(tmp_ret=-4012, tmp_ret="OB_TIMEOUT") [2024-09-13 13:02:33.242982] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=16] PNIO [ratelimit] time: 1726203753242981, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007614, add_bytes: 0 [2024-09-13 13:02:33.257782] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=12] table not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, table_name.ptr()="data_size:27, data:5F5F616C6C5F7669727475616C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:33.257808] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=24][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, ret=-5019) [2024-09-13 13:02:33.257816] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) 
[20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_virtual_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:33.257823] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_virtual_ls_meta_table) [2024-09-13 13:02:33.257829] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=4][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:33.257833] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:33.257839] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=3][errcode=-5019] Table 'oceanbase.__all_virtual_ls_meta_table' doesn't exist [2024-09-13 13:02:33.257844] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=4][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:33.257848] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:33.257853] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=5][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:33.257857] WDIAG [SQL.RESV] resolve_joined_table_item (ob_dml_resolver.cpp:3379) 
[20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=4][errcode=-5019] resolve table failed(ret=-5019) [2024-09-13 13:02:33.257862] WDIAG [SQL.RESV] resolve_joined_table (ob_dml_resolver.cpp:2934) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=5][errcode=-5019] resolve joined table item failed(ret=-5019) [2024-09-13 13:02:33.257867] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2788) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=4][errcode=-5019] resolve joined table failed(ret=-5019) [2024-09-13 13:02:33.257872] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:33.257885] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=13][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:33.257893] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=7][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:33.257896] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:33.257908] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=8][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:33.257915] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:33.257921] WDIAG 
[SQL] handle_physical_plan (ob_sql.cpp:5029) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=5][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:33.257926] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:33.257933] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=6][errcode=-5019] fail to handle text query(stmt=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;, ret=-5019) [2024-09-13 13:02:33.257941] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:33.257947] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=6][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:33.257961] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) 
[20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=11][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:33.257975] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:33.257982] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=6][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:33.257985] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:33.258006] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:33.258014] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20295][BlackListServic][T1][YB42AC103323-000621F921260C82-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.258023] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20295][BlackListServic][T0][YB42AC103323-000621F921260C82-0-0] [lt=8][errcode=-5019] failed to process final(executor={ObIExecutor:, 
sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, aret=-5019, ret=-5019) [2024-09-13 13:02:33.258032] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:33.258037] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:33.258044] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:33.258049] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203753257551, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = 
b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:33.258059] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:111) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:33.258064] WDIAG [STORAGE.TRANS] do_thread_task_ (ob_black_list.cpp:222) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:33.258113] INFO [STORAGE.TRANS] run1 (ob_black_list.cpp:194) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=8] ls blacklist refresh finish(cost_time=1366) [2024-09-13 13:02:33.258475] INFO [DETECT] record_summary_info_and_logout_when_necessary_ (ob_lcl_batch_sender_thread.cpp:203) [20240][T1_LCLSender][T1][Y0-0000000000000000-0-0] [lt=33] ObLCLBatchSenderThread periodic report summary info(duty_ratio_percentage=0, total_constructed_detector=0, total_destructed_detector=0, total_alived_detector=0, _lcl_op_interval=30000, lcl_msg_map_.count()=0, *this={this:0x2b07c25fe2b0, is_inited:true, is_running:true, total_record_time:5010000, over_night_times:0}) [2024-09-13 13:02:33.258660] WDIAG [PALF] convert_to_ts (scn.cpp:265) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4016] invalid scn should not convert to ts (val_=18446744073709551615) [2024-09-13 13:02:33.258672] INFO [STORAGE.TRANS] print_stat_ (ob_tenant_weak_read_service.cpp:541) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11] [WRS] [TENANT_WEAK_READ_SERVICE] [STAT](tenant_id=1, 
server_version={version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0}, server_version_delta=1726203753258658, in_cluster_service=false, cluster_version={val:18446744073709551615, v:3}, min_cluster_version={val:18446744073709551615, v:3}, max_cluster_version={val:18446744073709551615, v:3}, get_cluster_version_err=0, cluster_version_delta=-1, cluster_service_master="0.0.0.0:0", cluster_service_tablet_id={id:226}, post_cluster_heartbeat_count=0, succ_cluster_heartbeat_count=0, cluster_heartbeat_interval=1000000, local_cluster_version={val:0, v:0}, local_cluster_delta=1726203753258658, force_self_check=false, weak_read_refresh_interval=100000) [2024-09-13 13:02:33.258699] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:33.258713] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753258696) [2024-09-13 13:02:33.258720] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203753058668, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 
13:02:33.258738] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.258747] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.258751] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753258727) [2024-09-13 13:02:33.259859] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=3][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:33.260014] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C8B-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.260287] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.260306] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.260319] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=12] leader doesn't exist, try 
use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.260333] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.260334] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.260366] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=8][errcode=0] server is initiating(server_id=0, local_seq=52, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:33.260568] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.260582] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.260593] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753260592, replica_locations:[]}) [2024-09-13 13:02:33.260607] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] 
[TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.260631] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.260639] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.260654] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.260691] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561137809, cache_obj->added_lc()=false, cache_obj->get_object_id()=673, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.261352] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=12] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:33.261374] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=20][errcode=-5019] synonym not exist(tenant_id=1, 
database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:33.261381] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=6][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:33.261387] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:33.261393] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:33.261398] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=5][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:33.261404] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:33.261408] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:33.261413] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=5][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:33.261417] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] resolve basic table 
failed(ret=-5019) [2024-09-13 13:02:33.261422] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=5][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:33.261426] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:33.261431] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=5][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:33.261445] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:33.261453] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:33.261460] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:33.261465] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:33.261473] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=7][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:33.261477] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=3][errcode=-5019] 
fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:33.261482] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=5][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:33.261486] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:33.261499] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=11][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:33.261512] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:33.261516] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=3][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:33.261520] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:33.261530] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id 
= 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:33.261534] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.261542] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.261551] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=8][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:33.261563] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=11][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:33.261575] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=11][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:33.261581] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=5][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:33.261586] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203753261248, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER 
BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:33.261592] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=5][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:33.261596] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=3][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:33.261641] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=8][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:33.261652] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=10][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:33.261657] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=5][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:33.261662] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:33.261668] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:33.261675] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] 
[lt=6][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:33.261679] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8B-0-0] [lt=4][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:33.261793] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.261808] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.261818] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753261817, replica_locations:[]}) [2024-09-13 13:02:33.261862] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=1009941, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.268577] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.270072] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.288184] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=27] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1) [2024-09-13 13:02:33.293714] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=21] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:33.305133] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.305403] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.305424] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.305458] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753305457, replica_locations:[]}) [2024-09-13 13:02:33.305484] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.305508] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.305517] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.305540] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.305587] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561182704, cache_obj->added_lc()=false, cache_obj->get_object_id()=675, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.306676] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.306909] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.306927] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.306936] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753306936, replica_locations:[]}) [2024-09-13 13:02:33.306986] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=44000, remain_us=964817, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.329676] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.331748] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.334509] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=20][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4018, dropped:8, tid:19944}]) [2024-09-13 13:02:33.341127] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, 
tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:33.341137] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CCA-0-0] [lt=29][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203753341101}) [2024-09-13 13:02:33.341153] INFO [STORAGE.TRANS] statistics (ob_gts_source.cpp:70) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=25] gts statistics(tenant_id=1, gts_rpc_cnt=0, get_gts_cache_cnt=8897, get_gts_with_stc_cnt=0, try_get_gts_cache_cnt=0, try_get_gts_with_stc_cnt=0, wait_gts_elapse_cnt=0, try_wait_gts_elapse_cnt=0) [2024-09-13 13:02:33.341166] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=12] refresh gts(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1, need_refresh=false, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:33.341178] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:33.349218] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=17] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:33.351233] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.351489] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, 
replica count=0) [2024-09-13 13:02:33.351509] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.351516] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.351524] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.351540] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753351538, replica_locations:[]}) [2024-09-13 13:02:33.351561] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.351587] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.351596] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.351617] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.351668] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561228783, cache_obj->added_lc()=false, cache_obj->get_object_id()=676, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.352709] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.352909] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.352929] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.352935] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.352943] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.352952] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753352951, replica_locations:[]}) [2024-09-13 13:02:33.353006] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=45000, remain_us=918798, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.358761] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.358827] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=65][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.358839] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753358721) [2024-09-13 13:02:33.359216] WDIAG [STORAGE.TRANS] 
process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC2-0-0] [lt=34][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753358221) [2024-09-13 13:02:33.359244] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC2-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203753358221}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:33.359264] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:33.359287] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, 
ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753359257) [2024-09-13 13:02:33.359307] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203753258727, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:33.359328] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:33.359359] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.359372] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.359384] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753359349) [2024-09-13 13:02:33.365451] 
INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B51-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:33.365472] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B51-0-0] [lt=20][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203753365029], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:33.365934] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE1-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:33.366516] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE1-0-0] [lt=15][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203753366222, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035686, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203753365866}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:33.366546] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE1-0-0] [lt=29][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:33.392376] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.393932] WDIAG 
[SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005B-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.398225] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.398468] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.398494] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.398501] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.398510] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.398523] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753398521, replica_locations:[]}) [2024-09-13 
13:02:33.398538] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.398563] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.398575] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.398614] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.398713] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=51][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561275827, cache_obj->added_lc()=false, cache_obj->get_object_id()=677, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.400005] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.400208] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.400227] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.400234] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.400241] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.400251] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753400250, replica_locations:[]}) [2024-09-13 13:02:33.400303] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=46000, remain_us=871501, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.429886] WDIAG [ARCHIVE] do_thread_task_ (ob_archive_sender.cpp:256) [20256][T1_ArcSender][T1][YB42AC103323-000621F920F60C7D-0-0] [lt=13][errcode=-4018] try free send task failed(ret=-4018) [2024-09-13 13:02:33.434660] WDIAG [SHARE] refresh 
(ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=24][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:1279, tid:20197}])
[2024-09-13 13:02:33.443995] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690064-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.446542] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.446830] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.446856] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.446886] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=29] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.446898] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.446913] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753446911, replica_locations:[]})
[2024-09-13 13:02:33.446928] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.446961] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:46, local_retry_times:46, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:33.446985] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.446998] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.447013] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.447021] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.447030] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.447050] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.447068] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.447126] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561324239, cache_obj->added_lc()=false, cache_obj->get_object_id()=678, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.448234] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.448274] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=38][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.448370] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.448588] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.448608] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.448614] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.448621] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.448631] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753448630, replica_locations:[]})
[2024-09-13 13:02:33.448643] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.448659] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.448670] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.448687] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:33.448697] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:33.448706] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:33.448725] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:33.448735] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:33.448741] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:33.448747] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:33.448751] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:33.448759] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:33.448765] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:33.448775] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:33.448780] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:33.448783] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:33.448788] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:33.448793] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:33.448798] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:33.448808] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:33.448817] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:33.448822] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:33.448827] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:33.448835] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:33.448840] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=47, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:33.448856] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] will sleep(sleep_us=47000, remain_us=822947, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:33.459326] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.459348] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.459355] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753459311)
[2024-09-13 13:02:33.481383] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20294][T1_L0_G0][T1][YB42AC103326-00062119DAF2902F-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:33.488306] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=47] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0)
[2024-09-13 13:02:33.494098] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=55] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:33.496088] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.496444] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.496470] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.496480] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.496495] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.496514] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753496513, replica_locations:[]})
[2024-09-13 13:02:33.496537] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.496570] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=27][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:47, local_retry_times:47, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:33.496593] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.496604] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.496619] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.496629] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.496639] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.496660] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.496675] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.496733] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561373845, cache_obj->added_lc()=false, cache_obj->get_object_id()=679, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.497769] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.497800] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=31][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.497912] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.498317] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.498339] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.498346] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.498355] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.498365] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753498364, replica_locations:[]})
[2024-09-13 13:02:33.498378] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.498385] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.498395] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.498405] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:33.498418] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:33.498423] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:33.498459] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=35][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:33.498469] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:33.498475] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:33.498483] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:33.498488] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:33.498493] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:33.498500] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:33.498509] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:33.498514] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:33.498520] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:33.498525] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:33.498530] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:33.498537] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:33.498548] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:33.498557] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:33.498563] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:33.498568] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:33.498579] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:33.498602] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=48, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:33.498620] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] will sleep(sleep_us=48000, remain_us=773184, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:33.518750] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] [lt=20][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:33.529983] INFO [ARCHIVE] do_thread_task_ (ob_archive_sender.cpp:262) [20256][T1_ArcSender][T1][YB42AC103323-000621F920F60C7D-0-0] [lt=27] ObArchiveSender is running(thread_index=0)
[2024-09-13 13:02:33.546826] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.547108] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.547140] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.547147] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.547155] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.547169] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753547167, replica_locations:[]})
[2024-09-13 13:02:33.547184] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.547204] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:48, local_retry_times:48, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:33.547222] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.547231] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.547241] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.547246] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.547250] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.547268] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.547278] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.547323] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561424439, cache_obj->added_lc()=false, cache_obj->get_object_id()=680, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.548355] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.548387] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=31][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.548486] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.548706] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.548722] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.548727] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.548736] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.548744] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753548744, replica_locations:[]})
[2024-09-13 13:02:33.548764] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.548772] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.548783] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.548794] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:33.548800] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:33.548808] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:33.548818] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:33.548829] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:33.548834] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:33.548845] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:33.548849]
WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:33.548854] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:33.548859] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:33.548868] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:33.548872] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:33.548886] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:33.548890] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:33.548895] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:33.548900] WDIAG [SQL] generate_plan 
(ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:33.548910] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:33.548919] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:33.548929] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:33.548934] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:33.548939] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:33.548946] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=49, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:33.548963] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] will sleep(sleep_us=49000, remain_us=722840, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.558855] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC3-0-0] [lt=34][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753558377) [2024-09-13 13:02:33.558893] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC3-0-0] [lt=32][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203753558377}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:33.558916] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:33.558962] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ 
(ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=27][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753558908) [2024-09-13 13:02:33.558978] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203753359324, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:33.559006] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.559018] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.559038] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753558992) [2024-09-13 13:02:33.598250] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.598732] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.598757] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.598774] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.598802] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.598817] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753598816, replica_locations:[]}) [2024-09-13 13:02:33.598833] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.598854] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, 
stmt_retry_times:49, local_retry_times:49, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:33.598873] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.598889] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.598899] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:33.598906] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:33.598910] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:33.598930] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:33.598943] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.598990] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6561476106, cache_obj->added_lc()=false, cache_obj->get_object_id()=681, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.599933] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:33.599961] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:33.600106] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=34][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.600292] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.600319] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.600327] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.600337] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.600348] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753600347, replica_locations:[]}) [2024-09-13 13:02:33.600370] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:33.600378] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:33.600387] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}) [2024-09-13 13:02:33.600399] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:33.600408] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:33.600417] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:33.600430] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:33.600448] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:33.600454] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:33.600462] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:33.600471] WDIAG [SQL.JO] 
generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:33.600475] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:33.600484] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:33.600493] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:33.600501] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:33.600505] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:33.600509] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:33.600516] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:33.600521] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:33.600535] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:33.600544] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:33.600552] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:33.600558] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:33.600568] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:33.600574] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=50, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:33.600600] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] will sleep(sleep_us=50000, remain_us=671203, base_sleep_us=1000, retry_sleep_type=1, 
v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.625218] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=31] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, 
group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:33.650846] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.651189] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.651214] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.651222] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.651230] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.651245] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753651244, replica_locations:[]})
[2024-09-13 13:02:33.651261] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.651279] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:50, local_retry_times:50, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:33.651294] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.651303] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.651322] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.651327] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.651331] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.651345] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.651355] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.651403] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561528518, cache_obj->added_lc()=false, cache_obj->get_object_id()=682, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.652334] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.652362] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.652477] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.652662] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.652677] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.652706] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=28] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.652713] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.652723] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753652722, replica_locations:[]})
[2024-09-13 13:02:33.652736] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.652744] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.652750] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.652760] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:33.652766] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:33.652772] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:33.652787] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:33.652799] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:33.652804] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:33.652810] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:33.652814] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:33.652818] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:33.652824] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:33.652832] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:33.652836] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:33.652841] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:33.652844] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:33.652849] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:33.652857] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:33.652867] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:33.652884] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:33.652892] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:33.652897] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:33.652905] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:33.652911] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=51, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:33.652929] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] will sleep(sleep_us=51000, remain_us=618875, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:33.658990] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.659009] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.659015] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753658973)
[2024-09-13 13:02:33.692045] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] Cache replace map node details(ret=0, replace_node_count=0, replace_time=3651, replace_start_pos=503312, replace_num=62914)
[2024-09-13 13:02:33.692082] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=34] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10)
[2024-09-13 13:02:33.694497] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=39] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:33.704208] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.704489] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.704513] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.704520] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.704532] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.704558] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753704556, replica_locations:[]})
[2024-09-13 13:02:33.704573] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.704592] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:51, local_retry_times:51, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:33.704625] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=27][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.704634] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.704646] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.704653] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.704658] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.704670] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.704678] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.704731] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561581843, cache_obj->added_lc()=false, cache_obj->get_object_id()=683, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.705675] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.705705] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=29][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.705842] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.706003] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.706019] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.706025] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.706033] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.706042] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753706041, replica_locations:[]})
[2024-09-13 13:02:33.706056] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.706071] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.706080] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.706092] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:33.706101] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:33.706109] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:33.706133] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:33.706144] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:33.706156] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:33.706167] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:33.706173] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:33.706182] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:33.706194] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:33.706207] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:33.706218] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:33.706228] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:33.706238] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:33.706246] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:33.706257] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:33.706270] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:33.706280] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:33.706288] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:33.706293] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:33.706301] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:33.706312] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=52, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:33.706330] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] will sleep(sleep_us=52000, remain_us=565474, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:33.727838] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:33.727901] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=30] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952)
[2024-09-13 13:02:33.758575] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.759002] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC4-0-0] [lt=27][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753758539)
[2024-09-13 13:02:33.759037] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:33.759031] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC4-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203753758539}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:33.759056] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753759031)
[2024-09-13 13:02:33.759071] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203753558990, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:33.759100] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.759113] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.759125] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753759087)
[2024-09-13 13:02:33.759141] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.759147] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.759152] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753759139)
[2024-09-13 13:02:33.759005] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.759191] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=184][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.759205] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.759230] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.759258] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753759256, replica_locations:[]})
[2024-09-13 13:02:33.759280] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.759305] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:52, local_retry_times:52, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:33.759326] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.759339] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.759354] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.759364] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:33.759374] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.759416] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=30][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:33.759430] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.759493] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561636609, cache_obj->added_lc()=false, cache_obj->get_object_id()=684, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.760379] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:33.760405] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:33.760514] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.760729] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.760743] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.760748] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.760758] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.760767] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]},
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753760766, replica_locations:[]}) [2024-09-13 13:02:33.760780] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:33.760806] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:33.760845] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] will sleep(sleep_us=53000, remain_us=510958, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.794131] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=13][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:33.814084] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.814473] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.814498] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.814509] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.814524] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.814545] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753814543, replica_locations:[]}) [2024-09-13 13:02:33.814567] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.814598] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.814610] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.814654] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.814712] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561691825, cache_obj->added_lc()=false, cache_obj->get_object_id()=685, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.815806] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.815999] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.816020] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.816030] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.816041] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.816057] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753816056, replica_locations:[]}) [2024-09-13 13:02:33.816123] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=54000, remain_us=455681, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.841727] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:33.841783] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:33.859226] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:33.859260] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=31][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, 
ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753859218) [2024-09-13 13:02:33.859274] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203753759084, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:33.859303] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.859309] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:33.859314] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753859284) [2024-09-13 13:02:33.866014] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B52-0-0] [lt=14] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:33.866036] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B52-0-0] [lt=22][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203753865539], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 
13:02:33.866629] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE2-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:33.867360] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE2-0-0] [lt=23][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203753867013, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035695, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203753866205}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:33.867398] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE2-0-0] [lt=38][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:33.870342] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.870612] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.870634] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.870640] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.870650] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.870663] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753870662, replica_locations:[]}) [2024-09-13 13:02:33.870679] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.870700] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.870710] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.870739] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=0] the key is not 
valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.870786] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561747904, cache_obj->added_lc()=false, cache_obj->get_object_id()=686, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.871852] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.872052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.872070] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.872077] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.872086] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:33.872097] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753872096, replica_locations:[]}) [2024-09-13 13:02:33.872147] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=55000, remain_us=399656, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.873298] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=18] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:33.873323] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=13] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:33.873836] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:33.892174] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=10] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9) [2024-09-13 13:02:33.894890] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=52] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, 
tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:33.900281] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=5364, clean_start_pos=1006632, clean_num=125829) [2024-09-13 13:02:33.927412] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.927659] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.927684] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.927695] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.927719] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.927739] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753927737, replica_locations:[]}) [2024-09-13 13:02:33.927761] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:33.927793] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:33.927806] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:33.927835] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:33.927906] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561805020, cache_obj->added_lc()=false, cache_obj->get_object_id()=687, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:33.929231] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:33.929466] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.929501] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:33.929512] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:33.929532] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:33.929548] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753929547, replica_locations:[]}) [2024-09-13 13:02:33.929614] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=56000, remain_us=342190, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:33.959309] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:33.959333] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:33.959336] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC5-0-0] [lt=27][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753958694) [2024-09-13 13:02:33.959357] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC5-0-0] [lt=19][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203753958694}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 
13:02:33.959367] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=33][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203753959300)
[2024-09-13 13:02:33.959383] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203753859284, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:33.959410] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.959417] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.959428] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753959396)
[2024-09-13 13:02:33.959464] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=30][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.959485] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:33.959495] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203753959460)
[2024-09-13 13:02:33.985864] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.986296] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.986323] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.986334] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.986356] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.986374] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753986373, replica_locations:[]})
[2024-09-13 13:02:33.986397] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:33.986427] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:33.986447] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=20][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:33.986479] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:33.986540] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561863653, cache_obj->added_lc()=false, cache_obj->get_object_id()=688, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:33.987838] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:33.988065] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.988089] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:33.988101] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:33.988121] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:33.988137] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203753988136, replica_locations:[]})
[2024-09-13 13:02:33.988201] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=283603, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:34.033145] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=20][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:34.045427] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.045750] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.045780] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.045791] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.045805] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.045820] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754045819, replica_locations:[]})
[2024-09-13 13:02:34.045841] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.045870] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.045902] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=30][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.045927] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.045984] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561923097, cache_obj->added_lc()=false, cache_obj->get_object_id()=689, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.047240] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.047449] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.047495] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=43][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.047510] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.047525] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.047541] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754047540, replica_locations:[]})
[2024-09-13 13:02:34.047602] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=224201, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:34.059536] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:34.059578] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754059528)
[2024-09-13 13:02:34.059596] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203753959393, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:34.059622] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.059638] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.059645] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754059608)
[2024-09-13 13:02:34.083070] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D8E48926-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:34.083804] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D8E48926-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:34.092268] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=23] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8)
[2024-09-13 13:02:34.093364] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=13] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.093915] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=15] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.093997] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=13] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.094036] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.094900] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=13] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.094911] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.095025] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=15] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.095364] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=27] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.095767] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=12] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.100431] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=25][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:34.100692] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=73] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:34.105842] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.106192] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.106218] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.106228] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.106240] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.106259] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754106257, replica_locations:[]})
[2024-09-13 13:02:34.106284] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.106315] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.106327] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.106362] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.106422] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6561983537, cache_obj->added_lc()=false, cache_obj->get_object_id()=690, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.107690] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.107937] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.107966] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.107976] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.107990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.108002] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754108001, replica_locations:[]})
[2024-09-13 13:02:34.108070] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=163734, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:34.119465] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=15] swc wakeup.(stat_period_=1000000, ready=false)
[2024-09-13 13:02:34.138034] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC83-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:34.158207] INFO [SQL.EXE] run2 (ob_maintain_dependency_info_task.cpp:227) [19986][MaintainDepInfo][T0][Y0-0000000000000000-0-0] [lt=23] [ASYNC TASK QUEUE](queue_.size()=0, sys_view_consistent_.size()=0)
[2024-09-13 13:02:34.159243] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC6-0-0] [lt=19][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754158791)
[2024-09-13 13:02:34.159275] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC6-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203754158791}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:34.159308] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.159327] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.159340] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754159294)
[2024-09-13 13:02:34.164229] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=27] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1})
[2024-09-13 13:02:34.166239] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D9A3A431-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:34.167298] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.167681] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.167705] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.167715] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.167730] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.167748] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754167747, replica_locations:[]})
[2024-09-13 13:02:34.167769] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.167798] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.167811] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.167840] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.167910] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562045022, cache_obj->added_lc()=false, cache_obj->get_object_id()=691, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.169289] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.169535] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.169559] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.169571] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.169586] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.169601] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754169600, replica_locations:[]})
[2024-09-13 13:02:34.169664] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1] will sleep(sleep_us=60000, remain_us=102139, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203754271803)
[2024-09-13 13:02:34.182338] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=15] PNIO [ratelimit] time: 1726203754182337, bytes: 4259114, bw: 0.056373 MB/s, add_ts: 1007620, add_bytes: 59562
[2024-09-13 13:02:34.198579] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:565) [20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=36] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0)
[2024-09-13 13:02:34.212365] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782E8-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.216912] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}])
[2024-09-13 13:02:34.227884] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=10] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:34.227984] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=18] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952)
[2024-09-13 13:02:34.229449] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=13] gc stale ls task succ
[2024-09-13 13:02:34.229849] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:346) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=9] ====== check clog disk timer task ======
[2024-09-13 13:02:34.229867] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=16] get_disk_usage(ret=0, capacity(MB):=0, used(MB):=0)
[2024-09-13 13:02:34.229889] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=18] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false)
[2024-09-13 13:02:34.229895] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.230229] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.230245] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.230257] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.230271] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.230286] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754230285, replica_locations:[]})
[2024-09-13 13:02:34.230306] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.230333] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.230345] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.230374] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.230429] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562107544, cache_obj->added_lc()=false, cache_obj->get_object_id()=692, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.231408] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.231636] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.231656] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.231672]
INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.231685] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.231699] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754231698, replica_locations:[]}) [2024-09-13 13:02:34.231760] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0] will sleep(sleep_us=40043, remain_us=40043, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203754271803) [2024-09-13 13:02:34.234191] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=17] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:34.238869] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:34.238897] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=25][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:34.238904] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) 
[19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:34.238911] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:34.245890] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.246362] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.247093] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.247407] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.247639] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.250594] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=25] PNIO [ratelimit] time: 1726203754250593, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007612, add_bytes: 0 [2024-09-13 13:02:34.259360] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, 
tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:34.259380] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754259353) [2024-09-13 13:02:34.259389] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203754059607, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:34.259407] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:34.259412] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:34.259417] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754259396) [2024-09-13 13:02:34.261767] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=5][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 
13:02:34.261900] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.262192] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.262209] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.262216] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.262224] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.262256] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=0] server is initiating(server_id=0, local_seq=53, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:34.263326] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=9] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:34.263353] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=24][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:34.263363] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=10][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:34.263377] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=13][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:34.263386] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:34.263396] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=9][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:34.263406] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:34.263415] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=8][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:34.263423] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:34.263430] WDIAG [SQL.RESV] resolve_table 
(ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:34.263452] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=21][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:34.263462] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=9][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:34.263470] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=8][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:34.263476] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=5][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:34.263489] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:34.263499] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=9][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:34.263509] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=8][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:34.263519] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=9][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 
13:02:34.263526] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:34.263535] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=8][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:34.263543] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:34.263560] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=14][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:34.263578] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=14][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:34.263587] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=9][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:34.263594] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:34.263615] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:34.263631] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=14][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.263638] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:34.263650] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=10][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:34.263661] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=10][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:34.263667] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=5][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:34.263679] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=10][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203754263131, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, 
svr_port) [2024-09-13 13:02:34.263692] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=13][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:34.263700] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:34.263773] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=12][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:34.263788] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=14][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:34.263797] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=8][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:34.263808] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=10][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:34.263820] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=9][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:34.263828] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] check ls table 
failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:34.263835] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8C-0-0] [lt=7][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:34.271901] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203754271804, ctx_timeout_ts=1726203754271804, worker_timeout_ts=1726203754271803, default_timeout=1000000) [2024-09-13 13:02:34.271933] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=31][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:34.271945] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:34.271963] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.271978] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) [2024-09-13 13:02:34.271998] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.272018] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=18][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.272044] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.272087] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562149203, cache_obj->added_lc()=false, cache_obj->get_object_id()=693, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.273126] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203754271803, ctx_timeout_ts=1726203754271803, worker_timeout_ts=1726203754271803, default_timeout=1000000) [2024-09-13 13:02:34.273154] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=27][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:34.273165] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4012] batch renew ls locations failed(ret=-4012, 
ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:34.273177] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:34.273187] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:34.273202] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=15][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:34.273234] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:34.273258] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=23][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.273267] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.273294] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8] [TABLET_LOCATION] batch 
renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:34.273309] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=0][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:34.273325] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:34.273337] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.273346] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=7] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000698) [2024-09-13 13:02:34.273355] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:34.273365] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, 
column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:34.273374] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:34.273383] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:34.273395] WDIAG [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1) [2024-09-13 13:02:34.273408] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:34.273454] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C83-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562150574, cache_obj->added_lc()=false, cache_obj->get_object_id()=694, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") 
[2024-09-13 13:02:34.273503] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=11][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:34.273513] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=9][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:34.273521] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=7][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:34.273531] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:34.273542] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4601) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:34.273551] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1) [2024-09-13 13:02:34.273562] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=10] [REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, cost=2001761) [2024-09-13 13:02:34.273572] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) 
[19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1) [2024-09-13 13:02:34.273581] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=8] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2001792) [2024-09-13 13:02:34.273592] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=10][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1]) [2024-09-13 13:02:34.273600] INFO [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=8] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:34.273608] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C83-0-0] [lt=8][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:34.273618] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] fail to batch process task(ret=-4012) [2024-09-13 13:02:34.273629] WDIAG [SERVER] run1 (ob_uniq_task_queue.h:456) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1) [2024-09-13 13:02:34.273651] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=8] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1]) [2024-09-13 13:02:34.273661] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] 
[lt=9] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1) [2024-09-13 13:02:34.275283] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.275562] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.275582] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.275597] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.275609] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.275623] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754275622, replica_locations:[]}) [2024-09-13 13:02:34.275666] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1998006, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.275748] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.275925] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.275943] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.275952] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.275962] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.275978] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203754275977, replica_locations:[]}) [2024-09-13 13:02:34.275991] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.276011] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.276020] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.276037] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.276064] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562153185, cache_obj->added_lc()=false, cache_obj->get_object_id()=695, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.276779] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.276975] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.277014] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=38][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.277025] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.277036] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.277047] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754277047, replica_locations:[]}) [2024-09-13 13:02:34.277084] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1] will sleep(sleep_us=1000, remain_us=1996589, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.277747] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4719] get ls 
handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.277977] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.277998] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.278005] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.278015] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.278024] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.278034] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:34.278043] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:34.278050] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) 
[19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638) [2024-09-13 13:02:34.278120] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.278220] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.278278] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.278291] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.278305] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.278314] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.278322] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754278321, replica_locations:[]}) [2024-09-13 13:02:34.278335] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:34.278345] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0][errcode=-4638] fail to refresh core partition(tmp_ret=-4721) [2024-09-13 13:02:34.278396] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.278411] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.278421] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.278431] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.278456] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, 
renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754278455, replica_locations:[]}) [2024-09-13 13:02:34.278470] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.278487] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.278496] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.278522] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.278549] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562155670, cache_obj->added_lc()=false, cache_obj->get_object_id()=696, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.278573] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, 
timeout=2000000) [2024-09-13 13:02:34.278584] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638] [2024-09-13 13:02:34.278671] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.278950] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.278961] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.278966] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.278972] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.278979] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.278988] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", 
cluster_id=1726203323) [2024-09-13 13:02:34.278995] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:34.279002] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0) [2024-09-13 13:02:34.279070] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.279200] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.279209] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.279217] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.279224] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.279231] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", 
leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.279239] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:34.279243] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:34.279247] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1) [2024-09-13 13:02:34.279306] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.279444] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.279453] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.279458] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.279466] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] server_list is empty, 
do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.279474] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.279481] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:34.279486] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:34.279492] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2) [2024-09-13 13:02:34.279501] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638) [2024-09-13 13:02:34.279509] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:34.279513] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2) [2024-09-13 13:02:34.279652] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.279845] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.279870] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.279894] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.279905] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.279918] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754279917, replica_locations:[]}) [2024-09-13 13:02:34.279955] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1993717, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.282115] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.282355] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.282371] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.282381] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.282391] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.282407] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754282407, replica_locations:[]}) [2024-09-13 13:02:34.282455] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=46] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.282475] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.282489] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.282512] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.282539] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562159660, cache_obj->added_lc()=false, cache_obj->get_object_id()=697, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.283213] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.283477] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.283496] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.283506] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.283517] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.283534] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754283533, replica_locations:[]})
[2024-09-13 13:02:34.283570] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1990103, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.286735] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.286988] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.287009] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.287019] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.287034] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.287080] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=40] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754287079, replica_locations:[]})
[2024-09-13 13:02:34.287098] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.287117] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.287126] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.287147] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.287173] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562164294, cache_obj->added_lc()=false, cache_obj->get_object_id()=698, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.287795] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.288063] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.288092] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.288103] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.288115] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.288126] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754288126, replica_locations:[]})
[2024-09-13 13:02:34.288174] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1985498, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.292337] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=17] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7)
[2024-09-13 13:02:34.292361] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.292628] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.292647] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.292662] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.292676] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.292690] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754292689, replica_locations:[]})
[2024-09-13 13:02:34.292703] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.292721] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.292731] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.292752] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.292779] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562169900, cache_obj->added_lc()=false, cache_obj->get_object_id()=699, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.293586] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.293911] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.293933] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.293951] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.293966] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.293982] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754293981, replica_locations:[]})
[2024-09-13 13:02:34.294032] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1979641, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.299176] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.299458] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.299481] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.299492] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.299503] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.299517] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754299516, replica_locations:[]})
[2024-09-13 13:02:34.299536] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.299558] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.299568] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.299587] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.299616] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562176737, cache_obj->added_lc()=false, cache_obj->get_object_id()=700, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.300322] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.300554] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.300573] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.300583] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.300594] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.300606] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754300605, replica_locations:[]})
[2024-09-13 13:02:34.300648] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1973025, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.301047] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=23] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:34.306833] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.307145] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.307166] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.307192] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=26] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.307204] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.307217] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754307216, replica_locations:[]})
[2024-09-13 13:02:34.307231] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.307249] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.307259] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.307283] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.307313] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562184434, cache_obj->added_lc()=false, cache_obj->get_object_id()=701, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.308066] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.308290] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.308308] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.308323] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.308334] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.308346] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754308345, replica_locations:[]})
[2024-09-13 13:02:34.308384] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1965288, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.315575] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.315892] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.315926] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.315940] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.315956] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.315993] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754315976, replica_locations:[]})
[2024-09-13 13:02:34.316017] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.316039] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.316050] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.316070] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.316109] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562193229, cache_obj->added_lc()=false, cache_obj->get_object_id()=702, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.316929] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.317133] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.317152] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.317162] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.317177] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.317189] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754317189, replica_locations:[]})
[2024-09-13 13:02:34.317232] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1956441, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.325429] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.325736] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.325755] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.325765] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.325776] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.325793] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754325793, replica_locations:[]})
[2024-09-13 13:02:34.325807] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.325826] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.325841] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.325865] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.325903] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562203024, cache_obj->added_lc()=false, cache_obj->get_object_id()=703, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.326696] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.326949] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.326979] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.326993] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.327009] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.327025] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754327024, replica_locations:[]})
[2024-09-13 13:02:34.327073] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1946599, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.332929] INFO pn_ratelimit (group.c:643) [20054][IngressService][T0][Y0-0000000000000000-0-0] [lt=11] PNIO set ratelimit as 9223372036854775807 bytes/s, grp_id=2
[2024-09-13 13:02:34.336273] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.336550] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.336577] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.336588] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.336600] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.336613] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754336612, replica_locations:[]})
[2024-09-13 13:02:34.336628] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.336648] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.336658] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.336678] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.336729] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562213848, cache_obj->added_lc()=false, cache_obj->get_object_id()=704, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.337525] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.337725] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.337753] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.337763] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.337775] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.337786] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754337786, replica_locations:[]}) [2024-09-13 13:02:34.337827] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=10000, remain_us=1935846, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.342253] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:34.342280] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:34.342273] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CCF-0-0] [lt=19][errcode=0] tenant schema is not ready, need wait(ret=0, 
ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203754342231}) [2024-09-13 13:02:34.348010] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.348291] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.348311] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.348322] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.348333] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.348346] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754348345, replica_locations:[]}) [2024-09-13 13:02:34.348369] INFO 
[SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.348389] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.348399] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.348422] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.348464] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562225583, cache_obj->added_lc()=false, cache_obj->get_object_id()=705, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.349306] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.349306] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=19] skip inspect bad block(last_check_time=1726203737347795, 
last_macro_idx=-1) [2024-09-13 13:02:34.349521] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.349541] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.349551] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.349562] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.349574] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754349574, replica_locations:[]}) [2024-09-13 13:02:34.349620] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1924052, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.359382] WDIAG [STORAGE.TRANS] 
process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC7-0-0] [lt=33][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754358924) [2024-09-13 13:02:34.359405] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC7-0-0] [lt=21][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203754358924}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:34.359425] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:34.359461] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=36][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, 
ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754359418) [2024-09-13 13:02:34.359477] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203754259396, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:34.359495] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:34.359515] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:34.359524] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:34.359528] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754359504) [2024-09-13 13:02:34.360791] WDIAG 
[SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.361130] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.361146] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.361152] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.361160] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.361168] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754361168, replica_locations:[]}) [2024-09-13 13:02:34.361190] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations 
finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.361206] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.361215] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.361230] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.361262] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562238381, cache_obj->added_lc()=false, cache_obj->get_object_id()=706, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.362039] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.362282] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.362301] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.362307] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.362314] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.362325] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754362325, replica_locations:[]}) [2024-09-13 13:02:34.362391] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1911281, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.366455] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B53-0-0] [lt=19] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:34.366467] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B53-0-0] [lt=11][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, 
srr:[mts=1726203754366014], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:34.366929] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE3-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:34.367608] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE3-0-0] [lt=7][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203754367305, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035715, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203754366416}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:34.367632] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE3-0-0] [lt=24][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:34.374574] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.374922] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.374940] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.374946] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.374953] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.374961] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754374961, replica_locations:[]}) [2024-09-13 13:02:34.374974] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.374993] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.375007] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.375028] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.375705] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562252180, cache_obj->added_lc()=false, cache_obj->get_object_id()=707, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.376940] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.377184] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.377203] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.377210] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.377217] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.377226] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754377225, replica_locations:[]}) [2024-09-13 13:02:34.377268] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=13000, remain_us=1896405, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.377797] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=16] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:34.382252] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=20][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:34.390470] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.391138] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.391166] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.391173] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.391181] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.391190] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754391190, replica_locations:[]}) [2024-09-13 13:02:34.391205] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.391223] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.391232] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) 
[2024-09-13 13:02:34.391250] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.391287] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562268406, cache_obj->added_lc()=false, cache_obj->get_object_id()=708, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.391937] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=16] ====== tenant freeze timer task ====== [2024-09-13 13:02:34.391967] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=19][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:34.392342] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.392608] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.392634] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.392640] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.392663] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.392679] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754392679, replica_locations:[]}) [2024-09-13 13:02:34.392724] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1880949, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.396962] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=26][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:34.406922] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4719] get ls handle 
failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.407337] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.407357] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.407364] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.407371] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.407385] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754407385, replica_locations:[]}) [2024-09-13 13:02:34.407403] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) 
[2024-09-13 13:02:34.407422] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.407431] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.407463] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.407497] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562284616, cache_obj->added_lc()=false, cache_obj->get_object_id()=709, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.408445] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.408807] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.408825] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.408831] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.408842] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.408850] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754408849, replica_locations:[]}) [2024-09-13 13:02:34.408903] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1864769, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.424159] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.424512] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.424536] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.424552] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.424566] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.424582] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754424581, replica_locations:[]}) [2024-09-13 13:02:34.424600] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.424624] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.424635] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:34.424658] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.424697] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562301815, cache_obj->added_lc()=false, cache_obj->get_object_id()=710, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.425543] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.425960] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.425981] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.425988] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.425995] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.426010] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754426009, replica_locations:[]}) [2024-09-13 13:02:34.426054] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1847619, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.442248] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.442665] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.442691] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.442698] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.442714] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.442724] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754442723, replica_locations:[]}) [2024-09-13 13:02:34.442737] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.442754] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.442762] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.442779] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.442821] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6562319939, cache_obj->added_lc()=false, cache_obj->get_object_id()=711, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.443666] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.444066] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.444102] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.444116] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.444137] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.444194] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=51] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754444193, replica_locations:[]}) [2024-09-13 13:02:34.444243] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1829430, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.445910] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690065-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.459494] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC8-0-0] [lt=28][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754458995) [2024-09-13 13:02:34.459501] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:34.459513] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:34.459520] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, 
index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754459487) [2024-09-13 13:02:34.459519] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC8-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203754458995}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:34.459534] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:34.459544] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754459529) [2024-09-13 13:02:34.459552] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=7][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203754359492, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:34.459561] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:34.459571] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:34.459574] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754459559) [2024-09-13 13:02:34.461419] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.461730] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.461748] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.461753] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.461768] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.461777] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754461776, replica_locations:[]}) [2024-09-13 13:02:34.461789] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.461809] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.461817] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.461836] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.461871] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562338990, cache_obj->added_lc()=false, cache_obj->get_object_id()=712, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.462705] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.462943] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.462964] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.462970] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.462986] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.462997] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754462996, replica_locations:[]}) [2024-09-13 13:02:34.463053] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1810619, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.473815] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.474193] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.475271] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=43][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.475545] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.475898] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.481226] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:34.481505] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.481530] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.481542] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.481552] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.481564] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754481564, replica_locations:[]})
[2024-09-13 13:02:34.481577] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.481595] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.481737] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=141][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.481752] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.481783] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562358902, cache_obj->added_lc()=false, cache_obj->get_object_id()=713, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.482496] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.482699] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.482718] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.482724] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.482734] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.482742] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754482741, replica_locations:[]})
[2024-09-13 13:02:34.482780] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1790893, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.492432] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=29] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6)
[2024-09-13 13:02:34.501369] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:34.501945] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.502330] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.502355] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.502361] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.502371] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.502383] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754502383, replica_locations:[]})
[2024-09-13 13:02:34.502396] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.502414] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.502422] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.502451] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.502483] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562379603, cache_obj->added_lc()=false, cache_obj->get_object_id()=714, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.503234] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.503655] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.503679] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.503685] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.503695] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.503706] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754503706, replica_locations:[]})
[2024-09-13 13:02:34.503747] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1769926, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.523949] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.524263] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.524287] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.524293] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.524303] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.524315] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754524314, replica_locations:[]})
[2024-09-13 13:02:34.524333] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.524354] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.524365] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.524388] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.524423] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562401543, cache_obj->added_lc()=false, cache_obj->get_object_id()=715, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.525235] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.525489] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.525507] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.525513] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.525523] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.525534] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754525533, replica_locations:[]})
[2024-09-13 13:02:34.525577] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1748095, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.536157] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=24][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:1022, tid:19944}])
[2024-09-13 13:02:34.546783] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.547105] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.547131] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.547141] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.547152] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.547168] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754547167, replica_locations:[]})
[2024-09-13 13:02:34.547190] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.547208] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:21, local_retry_times:21, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:34.547223] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.547232] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.547245] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:34.547250] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:34.547255] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:34.547269] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:34.547279] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.547316] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562424435, cache_obj->added_lc()=false, cache_obj->get_object_id()=716, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.548066] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.548091] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:34.548169] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.548422] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.548448] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.548464] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.548474] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.548485] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754548484, replica_locations:[]})
[2024-09-13 13:02:34.548497] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.548506] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:34.548515] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.548526] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:34.548534] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:34.548539] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:34.548559] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:34.548569] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:34.548576] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:34.548584] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:34.548589] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:34.548593] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:34.548599] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:34.548608] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:34.548613] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:34.548617] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:34.548620] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:34.548624] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:34.548629] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:34.548639] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:34.548644] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:34.548648] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:34.548652] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:34.548661] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:34.548665] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=22, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:34.548680] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] will sleep(sleep_us=22000, remain_us=1724993, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.559602] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:34.559636] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754559595)
[2024-09-13 13:02:34.559652] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203754459558, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:34.559679] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.559691] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.559700] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754559665)
[2024-09-13 13:02:34.570896] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.571270] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.571289] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.571295] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.571306] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.571323] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754571323, replica_locations:[]})
[2024-09-13 13:02:34.571337] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.571352] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:22, local_retry_times:22, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:34.571367] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.571376] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.571384] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:34.571392] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:34.571395] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:34.571409] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:34.571419] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.571471] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562448588, cache_obj->added_lc()=false, cache_obj->get_object_id()=717, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.572274] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.572300] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:34.572382] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.572800] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.572813] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.572818] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.572826] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.572834] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754572833, replica_locations:[]})
[2024-09-13 13:02:34.572845] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.572853] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:34.572866] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST",
cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:34.572886] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:34.572891] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:34.572895] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:34.572908] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:34.572918] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:34.572924] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:34.572931] WDIAG [SQL.JO] compute_base_table_property 
(ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:34.572936] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:34.572943] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:34.572954] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:34.572962] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:34.572970] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:34.572974] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:34.572980] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:34.572985] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:34.572992] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:34.573002] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:34.573009] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:34.573014] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:34.573055] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=39][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:34.573086] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=30][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:34.573102] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=23, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:34.573119] INFO [SERVER] 
sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] will sleep(sleep_us=23000, remain_us=1700553, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.596329] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.596851] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.596872] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.596891] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.596898] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.596910] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754596909, replica_locations:[]}) [2024-09-13 13:02:34.596923] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.596940] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:23, local_retry_times:23, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:34.596955] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.596974] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.596985] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:34.596992] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:34.596996] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:34.597013] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] 
[lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:34.597026] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.597060] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562474178, cache_obj->added_lc()=false, cache_obj->get_object_id()=718, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.597910] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:34.597935] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:34.598029] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.598247] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.598267] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.598273] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.598279] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.598287] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754598286, replica_locations:[]}) [2024-09-13 13:02:34.598299] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:34.598308] WDIAG [SHARE.LOCATION] get 
(ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:34.598317] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:34.598328] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:34.598336] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:34.598356] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:34.598371] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:34.598381] WDIAG [SQL.OPT] 
calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:34.598387] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:34.598395] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:34.598402] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:34.598406] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:34.598411] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:34.598419] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:34.598424] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) 
[2024-09-13 13:02:34.598431] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:34.598453] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:34.598460] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:34.598469] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:34.598479] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:34.598486] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:34.598494] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:34.598499] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:34.598507] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 
13:02:34.598511] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=24, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:34.598527] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] will sleep(sleep_us=24000, remain_us=1675146, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.622722] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.623117] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.623137] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.623144] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.623161] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] 
[lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.623170] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754623170, replica_locations:[]}) [2024-09-13 13:02:34.623184] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.623200] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:24, local_retry_times:24, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:34.623215] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.623222] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.623230] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:34.623238] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:34.623241] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:34.623253] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:34.623263] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.623302] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562500420, cache_obj->added_lc()=false, cache_obj->get_object_id()=719, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.624313] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=149][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 
13:02:34.624339] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:34.624430] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.624712] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.624727] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.624733] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.624740] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.624749] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754624748, replica_locations:[]})
[2024-09-13 13:02:34.624761] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.624773] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:34.624778] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.624790] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:34.624795] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:34.624800] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:34.624812] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:34.624821] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:34.624826] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:34.624831] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:34.624836] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:34.624841] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:34.624862] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:34.624869] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:34.624873] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:34.624894] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:34.624899] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:34.624903] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:34.624911] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:34.624920] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:34.624927] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:34.624935] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:34.624941] WDIAG [SQL] stmt_query (ob_sql.cpp:229)
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:34.624949] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:34.624958] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=25, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:34.624975] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] will sleep(sleep_us=25000, remain_us=1648698, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.625927] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=55] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:34.650179] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.650508] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.650534] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.650541] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.650549] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.650567] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS",
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754650567, replica_locations:[]})
[2024-09-13 13:02:34.650582] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.650599] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:25, local_retry_times:25, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:34.650615] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.650622] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.650633] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:34.650638] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:34.650642] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:34.650660] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:34.650673] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.650729] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562527829, cache_obj->added_lc()=false, cache_obj->get_object_id()=720, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.651548] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.651574] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:34.651657] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.651915] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.651938] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.651944] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.651957] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.651969] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754651968, replica_locations:[]})
[2024-09-13 13:02:34.651981] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.651988] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:34.651995] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.652006] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:34.652014] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:34.652022] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:34.652035] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0]
[lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:34.652045] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:34.652050] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:34.652058] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:34.652063] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:34.652070] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:34.652076] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:34.652084] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:34.652089] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:34.652093] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:34.652099] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:34.652104] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:34.652111] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:34.652123] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:34.652131] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:34.652139] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:34.652143] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:34.652148] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:34.652152] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=26, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:34.652166] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] will sleep(sleep_us=26000, remain_us=1621507, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.659633] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC9-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754659156)
[2024-09-13 13:02:34.659654] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AC9-0-0] [lt=20][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203754659156}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0,
valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:34.659675] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.659689] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.659695] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754659662)
[2024-09-13 13:02:34.678406] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.678742] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.678765] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.678772] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.678785] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.678799] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754678798, replica_locations:[]})
[2024-09-13 13:02:34.678814] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.678833] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:26, local_retry_times:26, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:34.678861] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.678870] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.678891] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:34.678895] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:34.678899] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:34.678914] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:34.678925] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.678968] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562556086, cache_obj->added_lc()=false, cache_obj->get_object_id()=721, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.679888] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.679907] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:34.680004] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.680258] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.680273] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.680281] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.680291] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.680301] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754680301, replica_locations:[]})
[2024-09-13 13:02:34.680313] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.680320] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:34.680329] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:34.680340] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:34.680348] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:34.680357] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:34.680370] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:34.680380] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:34.680386] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:34.680393] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:34.680399] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:34.680406] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:34.680411] WDIAG
[SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:34.680421] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:34.680425] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:34.680429] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:34.680433] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:34.680453] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:34.680457] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:34.680467] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:34.680476] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:34.680481] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:34.680488] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:34.680493] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:34.680498] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=27, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:34.680517] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] will sleep(sleep_us=27000, remain_us=1593156, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.692538] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=30] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5) [2024-09-13 13:02:34.701760] INFO [COMMON] compute_tenant_wash_size 
(ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=34] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:34.707782] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.708123] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.708148] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.708155] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.708167] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.708183] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203754708182, replica_locations:[]}) [2024-09-13 13:02:34.708198] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.708221] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:27, local_retry_times:27, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:34.708238] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.708247] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.708258] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:34.708266] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:34.708270] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:34.708293] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] 
[lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:34.708304] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.708353] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562585468, cache_obj->added_lc()=false, cache_obj->get_object_id()=722, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.709427] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:34.709462] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=34][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:34.709588] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.709802] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.709819] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.709824] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.709834] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.709842] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754709842, replica_locations:[]}) [2024-09-13 13:02:34.709852] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:34.709915] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1563758, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.727964] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=13] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:34.728071] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=22] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:34.738129] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.738517] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.738540] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.738549] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.738558] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.738568] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754738567, replica_locations:[]}) [2024-09-13 13:02:34.738582] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.738606] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.738615] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.738637] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.738682] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562615799, cache_obj->added_lc()=false, cache_obj->get_object_id()=723, cache_obj->get_tenant_id()=1, 
lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.739755] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.739965] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.739981] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.739987] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.739994] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.740003] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203754740002, replica_locations:[]}) [2024-09-13 13:02:34.740050] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1533623, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.759732] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:34.759756] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754759726) [2024-09-13 13:02:34.759765] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203754559665, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:34.759784] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:34.759793] WDIAG [STORAGE.TRANS] generate_server_version 
(ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:34.759798] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754759772) [2024-09-13 13:02:34.769233] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.769596] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.769615] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.769621] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.769628] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.769640] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754769640, replica_locations:[]}) [2024-09-13 13:02:34.769690] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=48] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:34.769711] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:34.769728] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:34.769738] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:34.769768] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:34.769814] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562646931, cache_obj->added_lc()=false, cache_obj->get_object_id()=724, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 
0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:34.770749] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.771048] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.771064] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.771071] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.771078] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.771089] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754771089, replica_locations:[]}) 
[2024-09-13 13:02:34.771134] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1502539, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:34.786496] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=19][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:34.801310] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:34.801642] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.801662] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:34.801669] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:34.801679] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:34.801700] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754801699, replica_locations:[]})
[2024-09-13 13:02:34.801713] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.801733] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.801742] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.801762] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.801799] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562678916, cache_obj->added_lc()=false, cache_obj->get_object_id()=725, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.802613] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.802884] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.802903] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.802909] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.802918] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.802929] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754802928, replica_locations:[]})
[2024-09-13 13:02:34.802971] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=31000, remain_us=1470701, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.834205] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.834491] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.834513] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.834520] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.834531] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.834542] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754834541, replica_locations:[]})
[2024-09-13 13:02:34.834556] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.834578] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.834586] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.834608] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.834650] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562711766, cache_obj->added_lc()=false, cache_obj->get_object_id()=726, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.835635] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.835902] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.835920] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.835926] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.835933] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.835941] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754835940, replica_locations:[]})
[2024-09-13 13:02:34.835986] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1437686, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.842728] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:34.842770] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]})
[2024-09-13 13:02:34.859716] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACA-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754859294)
[2024-09-13 13:02:34.859744] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACA-0-0] [lt=22][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203754859294}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:34.859774] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.859786] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.859793] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754859759)
[2024-09-13 13:02:34.867007] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B54-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:34.867029] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B54-0-0] [lt=20][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203754866519], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:34.867544] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE4-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:34.868169] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.868271] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE4-0-0] [lt=15][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203754867975, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035724, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203754867666}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:34.868298] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE4-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:34.868488] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.868509] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.868515] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.868524] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.868536] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754868536, replica_locations:[]})
[2024-09-13 13:02:34.868551] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.868574] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.868583] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.868612] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.868657] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562745774, cache_obj->added_lc()=false, cache_obj->get_object_id()=727, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.869643] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.869925] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.869943] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.869949] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.869957] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.869965] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754869964, replica_locations:[]})
[2024-09-13 13:02:34.870011] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=33000, remain_us=1403661, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.872551] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.873144] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.873199] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:34.892629] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=22] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4)
[2024-09-13 13:02:34.902145] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=82] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:34.903201] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.903529] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.903549] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.903556] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.903563] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.903573] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754903572, replica_locations:[]})
[2024-09-13 13:02:34.903586] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.903608] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.903617] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.903637] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.903677] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562780795, cache_obj->added_lc()=false, cache_obj->get_object_id()=728, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.904635] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.904927] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.904948] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.904955] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.904962] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.904970] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754904969, replica_locations:[]})
[2024-09-13 13:02:34.905017] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=34000, remain_us=1368656, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.939252] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.939562] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.939588] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.939595] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.939606] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.939622] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754939621, replica_locations:[]})
[2024-09-13 13:02:34.939638] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.939685] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.939695] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.939729] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.939778] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562816895, cache_obj->added_lc()=false, cache_obj->get_object_id()=729, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.940791] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.941040] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.941061] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.941067] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.941074] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.941084] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754941083, replica_locations:[]})
[2024-09-13 13:02:34.941138] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=35000, remain_us=1332535, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:34.959830] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1)
[2024-09-13 13:02:34.959850] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:34.959864] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754959822)
[2024-09-13 13:02:34.959872] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203754759772, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:34.959862] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACB-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203754959367)
[2024-09-13 13:02:34.959904] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.959910] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.959899] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACB-0-0] [lt=35][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203754959367}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:34.959915] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754959889)
[2024-09-13 13:02:34.959930] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.959934] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:34.959937] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203754959924)
[2024-09-13 13:02:34.976313] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.976859] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.976887] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.976893] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.976901] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.976914] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754976913, replica_locations:[]})
[2024-09-13 13:02:34.976930] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:34.976951] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:34.976961] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:34.976984] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:34.977030] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562854147, cache_obj->added_lc()=false, cache_obj->get_object_id()=730, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:34.978058] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:34.978367] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.978384] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:34.978390] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:34.978398] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:34.978407] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203754978406, replica_locations:[]})
[2024-09-13 13:02:34.978480] INFO [SERVER] 
sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1295192, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.014739] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.015276] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.015298] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.015304] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.015321] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.015336] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755015335, replica_locations:[]}) [2024-09-13 13:02:35.015352] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.015381] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.015391] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.015512] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.015562] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562892679, cache_obj->added_lc()=false, cache_obj->get_object_id()=731, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.016651] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.017059] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.017078] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.017084] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.017092] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.017101] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755017100, replica_locations:[]}) [2024-09-13 13:02:35.017154] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1] will sleep(sleep_us=37000, remain_us=1256519, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.054401] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4719] get ls 
handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.054940] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.054962] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.054969] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.054979] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.054993] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755054992, replica_locations:[]}) [2024-09-13 13:02:35.055011] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) 
[2024-09-13 13:02:35.055037] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.055047] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.055069] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.055117] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562932233, cache_obj->added_lc()=false, cache_obj->get_object_id()=732, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.056680] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.057141] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.057169] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.057180] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.057196] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.057212] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755057211, replica_locations:[]}) [2024-09-13 13:02:35.057269] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=38000, remain_us=1216404, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.059899] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACC-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755059437) [2024-09-13 13:02:35.059936] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACC-0-0] 
[lt=30][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203755059437}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:35.059959] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:35.060001] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755059952) [2024-09-13 13:02:35.060015] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203754959889, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, 
cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:35.060043] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.060051] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.060059] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755060029) [2024-09-13 13:02:35.084404] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [19931][pnio1][T0][YB42AC103326-00062119D8E48926-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.092432] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=22] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.092667] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3) [2024-09-13 13:02:35.093922] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=30] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.093966] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=15] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 
13:02:35.094786] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.094896] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=15] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.095011] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=16] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.095363] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=13] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.095522] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=10] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.095516] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.095909] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=8] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.096010] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.096030] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] 
[lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.096036] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.096048] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.096062] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755096061, replica_locations:[]}) [2024-09-13 13:02:35.096079] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.096101] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.096110] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.096142] WDIAG [SQL] move_to_sqlstat_cache 
(ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.096191] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6562973308, cache_obj->added_lc()=false, cache_obj->get_object_id()=733, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.097380] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.097824] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.097849] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.097859] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.097871] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.097897] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755097896, replica_locations:[]}) [2024-09-13 13:02:35.097962] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1175710, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.102519] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:35.119564] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=19] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:35.137181] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.137688] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.137718] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.137725] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.137735] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.137758] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755137757, replica_locations:[]}) [2024-09-13 13:02:35.137782] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.137805] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.137816] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, 
do_close_plan_ret=-4006) [2024-09-13 13:02:35.137836] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.137899] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563015015, cache_obj->added_lc()=false, cache_obj->get_object_id()=734, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.138622] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC84-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.139047] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.139392] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.139419] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.139429] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.139460] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=29] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.139477] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755139475, replica_locations:[]}) [2024-09-13 13:02:35.139545] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1] will sleep(sleep_us=40000, remain_us=1134128, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.159982] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACD-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755159509) [2024-09-13 13:02:35.160019] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACD-0-0] [lt=34][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, 
req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203755159509}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:35.160048] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.160073] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.160089] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755160031) [2024-09-13 13:02:35.163522] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2225-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.164197] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2229-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.164497] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) 
[20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB222A-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.164927] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB222E-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.165174] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB222F-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.165600] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2233-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.165841] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2234-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.166196] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2238-0-0] [lt=15][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.166433] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2239-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.166773] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB223D-0-0] [lt=40][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.179805] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.180306] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:35.180334] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.180346] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.180366] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.180404] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=28] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755180403, replica_locations:[]}) [2024-09-13 13:02:35.180431] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.180475] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.180491] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.180533] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.180599] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563057712, cache_obj->added_lc()=false, cache_obj->get_object_id()=735, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.181942] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=30][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.182275] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.182299] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.182309] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.182320] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.182333] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755182332, replica_locations:[]}) [2024-09-13 13:02:35.182402] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1] will sleep(sleep_us=41000, remain_us=1091271, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.189959] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=19] PNIO [ratelimit] time: 1726203755189957, bytes: 4388234, bw: 0.122207 MB/s, add_ts: 1007620, add_bytes: 129120 [2024-09-13 13:02:35.214392] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782E9-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.217569] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=21] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:35.223680] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] 
[lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.224125] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=68][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.224151] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.224161] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.224174] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.224191] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755224189, replica_locations:[]}) [2024-09-13 13:02:35.224218] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], 
ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.224287] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.224305] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.224331] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.224394] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563101507, cache_obj->added_lc()=false, cache_obj->get_object_id()=736, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.225710] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.226066] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.226091] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.226101] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.226119] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.226136] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755226135, replica_locations:[]}) [2024-09-13 13:02:35.226205] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=42000, remain_us=1047468, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.228050] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=12] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:35.228153] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=18] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 
13:02:35.229514] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=18] gc stale ls task succ [2024-09-13 13:02:35.234303] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=30] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:35.239053] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:35.239075] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:35.239083] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:35.239091] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:35.258219] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=18] PNIO [ratelimit] time: 1726203755258213, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007620, add_bytes: 0 [2024-09-13 13:02:35.260094] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACE-0-0] [lt=22][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755259609) [2024-09-13 13:02:35.260098] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb 
(ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:35.260130] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=30][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755260091) [2024-09-13 13:02:35.260139] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203755060027, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:35.260123] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACE-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203755259609}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, 
epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:35.260162] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.260169] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.260174] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755260149) [2024-09-13 13:02:35.260194] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.260202] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.260206] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755260191) [2024-09-13 13:02:35.263958] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=10][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 
13:02:35.264148] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C8D-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.264536] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.264559] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.264566] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.264576] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.264617] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=12][errcode=0] server is initiating(server_id=0, local_seq=54, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:35.265661] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=13] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:35.265688] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=24][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:35.265696] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:35.265703] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=7][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:35.265710] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:35.265714] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:35.265721] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:35.265733] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=11][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:35.265737] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:35.265747] WDIAG [SQL.RESV] resolve_table 
(ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=9][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:35.265752] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:35.265762] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=9][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:35.265767] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:35.265776] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=8][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:35.265788] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=6][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:35.265798] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=10][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:35.265806] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=6][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:35.265816] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=9][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 
13:02:35.265821] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:35.265827] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=5][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:35.265833] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=5][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:35.265851] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=13][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:35.265869] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=15][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:35.265884] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=14][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:35.265888] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:35.265900] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:35.265909] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.265915] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=6][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:35.265927] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=11][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:35.265933] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=5][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:35.265944] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=10][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:35.265950] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=5][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203755265484, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, 
svr_port) [2024-09-13 13:02:35.265963] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=13][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:35.265968] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=4][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:35.266024] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=6][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:35.266037] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=12][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:35.266043] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=5][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:35.266048] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=4][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:35.266055] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=5][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:35.266066] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=11][errcode=-5019] check ls table 
failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:35.266071] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8D-0-0] [lt=5][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:35.268412] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.268848] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.268873] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.268902] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=28] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.268920] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.268942] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755268941, replica_locations:[]}) [2024-09-13 13:02:35.268966] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.268996] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.269011] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.269077] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.269134] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563146247, cache_obj->added_lc()=false, cache_obj->get_object_id()=737, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.270419] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.270897] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.270922] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.270933] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.270952] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.270965] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755270964, replica_locations:[]}) [2024-09-13 13:02:35.271034] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1] will sleep(sleep_us=43000, remain_us=1002639, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.292777] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] 
[lt=24] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2) [2024-09-13 13:02:35.303048] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=74] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:35.314283] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.314884] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.314913] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=38][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.314924] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.314944] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.314971] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, 
ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755314969, replica_locations:[]}) [2024-09-13 13:02:35.314997] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.315032] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.315047] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.315079] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.315141] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563192254, cache_obj->added_lc()=false, cache_obj->get_object_id()=738, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.316499] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] 
[lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.316938] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.316963] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.316974] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.316985] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.316997] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755316997, replica_locations:[]}) [2024-09-13 13:02:35.317068] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1] will sleep(sleep_us=44000, remain_us=956605, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 
13:02:35.343355] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:35.343387] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:35.343371] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CD3-0-0] [lt=17][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203755343320}) [2024-09-13 13:02:35.349396] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=12] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:35.360278] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:35.360316] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=36][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755360269) [2024-09-13 13:02:35.360331] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203755260147, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:35.360362] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.360378] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.360386] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755360343) [2024-09-13 13:02:35.361305] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.362099] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.362126] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 
13:02:35.362134] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.362143] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.362163] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755362162, replica_locations:[]}) [2024-09-13 13:02:35.362181] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.362208] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.362219] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.362240] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") 
[2024-09-13 13:02:35.362292] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563239409, cache_obj->added_lc()=false, cache_obj->get_object_id()=739, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.363700] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.364134] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.364153] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.364160] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.364173] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.364184] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755364183, replica_locations:[]}) [2024-09-13 13:02:35.364239] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=45000, remain_us=909434, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.367499] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B55-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:35.367522] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B55-0-0] [lt=22][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203755367053], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:35.368034] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE5-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.368676] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE5-0-0] [lt=16][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203755368344, dst_cluster_id:1724378954, cost_time:{len:40, 
arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035768, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203755367928}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:35.368703] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE5-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.409490] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.409909] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.409943] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.409954] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.409971] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.409991] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755409989, replica_locations:[]}) [2024-09-13 13:02:35.410015] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.410045] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.410056] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.410097] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.410149] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563287265, cache_obj->added_lc()=false, cache_obj->get_object_id()=740, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:35.411286] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.411705] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.411729] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.411735] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.411750] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.411761] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755411760, replica_locations:[]}) [2024-09-13 13:02:35.411822] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=46000, remain_us=861851, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.448003] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690066-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.458064] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.458519] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.458560] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=38][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.458570] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.458579] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.458593] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755458592, replica_locations:[]}) [2024-09-13 13:02:35.458606] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.458646] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.458661] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.458683] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.458744] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563335860, cache_obj->added_lc()=false, cache_obj->get_object_id()=741, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.459873] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:35.460156] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACF-0-0] [lt=43][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755459752) [2024-09-13 13:02:35.460191] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.460188] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ACF-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203755459752}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:35.460208] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.460202] INFO 
[STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:35.460232] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.460215] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.460243] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.460252] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.460254] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755460253, replica_locations:[]}) [2024-09-13 13:02:35.460258] WDIAG 
[STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755460218) [2024-09-13 13:02:35.460323] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=813350, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.468665] INFO [LIB] log_compress_loop_ (ob_log_compressor.cpp:393) [19885][SyslogCompress][T0][Y0-0000000000000000-0-0] [lt=61] log compressor cycles once. (ret=0, cost_time=0, compressed_file_count=0, deleted_file_count=0, disk_remaining_size=182289354752) [2024-09-13 13:02:35.492889] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=31] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1) [2024-09-13 13:02:35.496815] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20294][T1_L0_G0][T1][YB42AC103326-00062119DAF2902F-0-0] [lt=27][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:35.503662] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=81] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:35.507548] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:35.507997] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.508018] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.508025] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.508033] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.508044] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755508044, replica_locations:[]}) [2024-09-13 13:02:35.508065] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.508085] WDIAG [SQL] 
do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.508092] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.508121] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.508169] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563385286, cache_obj->added_lc()=false, cache_obj->get_object_id()=742, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.509166] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.509485] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.509511] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:35.509519] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.509529] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.509542] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755509541, replica_locations:[]}) [2024-09-13 13:02:35.509605] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=48000, remain_us=764067, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.533080] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] [lt=54][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:35.557814] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.558252] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail 
to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.558274] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.558280] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.558289] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.558301] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755558300, replica_locations:[]}) [2024-09-13 13:02:35.558319] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.558344] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 
13:02:35.558356] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.558376] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.558422] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563435539, cache_obj->added_lc()=false, cache_obj->get_object_id()=743, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.559396] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.559724] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.559754] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.559760] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] leader 
doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.559768] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.559776] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755559776, replica_locations:[]}) [2024-09-13 13:02:35.559824] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=713849, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.560273] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:35.560316] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755560265) [2024-09-13 
13:02:35.560330] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203755360342, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:35.560350] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.560363] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.560373] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755560339) [2024-09-13 13:02:35.609063] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.609846] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.609870] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.609918] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=47] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.609928] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.609947] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755609946, replica_locations:[]}) [2024-09-13 13:02:35.609965] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.609991] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.610003] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.610059] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.610111] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563487227, cache_obj->added_lc()=false, cache_obj->get_object_id()=744, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.611188] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.611574] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.611592] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.611598] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.611610] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.611619] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755611619, replica_locations:[]}) [2024-09-13 13:02:35.611667] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=50000, remain_us=662005, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.626694] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=39] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, 
req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 
9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:35.637528] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=21][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:737, tid:19944}]) [2024-09-13 13:02:35.660342] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.660365] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.660372] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755660326) [2024-09-13 13:02:35.660404] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD0-0-0] [lt=22][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755659904) [2024-09-13 13:02:35.660456] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, 
dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:35.660447] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD0-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203755659904}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:35.660475] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755660451) [2024-09-13 13:02:35.660484] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203755560338, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:35.660498] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.660503] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.660506] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755660495) [2024-09-13 13:02:35.661891] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.662422] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.662465] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=41][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.662497] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=30] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.662514] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) 
[2024-09-13 13:02:35.662538] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755662537, replica_locations:[]})
[2024-09-13 13:02:35.662564] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:35.662605] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=34][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:50, local_retry_times:50, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:35.662629] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:35.662640] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:35.662663] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:35.662677] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:35.662683] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:35.662701] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:35.662719] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:35.662774] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563539887, cache_obj->added_lc()=false, cache_obj->get_object_id()=745, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:35.663999] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:35.664033] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=32][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:35.664137] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:35.664592] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:35.664614] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:35.664625] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:35.664643] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:35.664658] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755664657, replica_locations:[]})
[2024-09-13 13:02:35.664680] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:35.664697] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:35.664708] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:35.664729] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:35.664739] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:35.664754] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:35.664775] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:35.664792] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:35.664801] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:35.664815] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:35.664825] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:35.664837] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:35.664847] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:35.664862] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:35.664870] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:35.664897] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=26][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:35.664904] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:35.664917] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:35.664925] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:35.664945] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:35.664959] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:35.664973] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:35.664986] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:35.665000] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:35.665009] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=51, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:35.665037] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18] will sleep(sleep_us=51000, remain_us=608636, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203756273672)
[2024-09-13 13:02:35.692960] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=22] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0)
[2024-09-13 13:02:35.704230] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=67] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:35.708311] INFO [COMMON] generate_mod_stat_task (memory_dump.cpp:220) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=7] task info(*task={type_:2, dump_all_:false, p_context_:null, slot_idx_:0, dump_tenant_ctx_:false, tenant_id_:0, ctx_id_:0, p_chunk_:null})
[2024-09-13 13:02:35.708345] INFO [COMMON] handle (memory_dump.cpp:552) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=32] handle dump task(task={type_:2, dump_all_:false, p_context_:null, slot_idx_:0, dump_tenant_ctx_:false, tenant_id_:0, ctx_id_:0, p_chunk_:null})
[2024-09-13 13:02:35.708393] INFO [COMMON] update_check_range (ob_sql_mem_leak_checker.cpp:62) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=7] update_check_range(min_check_version=0, max_check_version=1, global_version=2)
[2024-09-13 13:02:35.713727] INFO handle (memory_dump.cpp:679) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=15] statistics: tenant_cnt: 3, max_chunk_cnt: 524288
tenant_id ctx_id chunk_cnt label_cnt segv_cnt
1 0 83 158 0
1 5 1 4 0
1 7 1 2 0
1 8 49 1 0
1 12 1 1 0
1 16 3 3 0
500 0 48 205 0
500 7 3 4 0
500 8 50 1 0
500 9 2 1 0
500 10 10 2 0
500 16 1 1 0
500 17 7 7 0
500 22 3 49 0
500 23 16 10 0
508 0 3 8 0
508 8 8 1 0
cost_time: 5346
[2024-09-13 13:02:35.713783] INFO [LIB] operator() (ob_malloc_allocator.cpp:519) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=21]
[MEMORY] tenant: 1, limit: 3,221,225,472 hold: 355,610,624 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0
[MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 240,267,264 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= PLAN_CACHE_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= GLIBC hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= CO_STACK hold_bytes= 102,760,448 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= META_OBJ_CTX_ID hold_bytes= 2,097,152 limit= 644,245,080
[MEMORY] ctx_id= RPC_CTX_ID hold_bytes= 6,291,456 limit= 9,223,372,036,854,775,807
[MEMORY][PM] tid= 20282 used= 2,079,936 hold= 2,097,152 pm=0x2b07d4ed4340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20287 used= 2,079,936 hold= 2,097,152 pm=0x2b07d5152340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20288 used= 2,079,936 hold= 2,097,152 pm=0x2b07d51d0340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20289 used= 2,079,936 hold= 2,097,152 pm=0x2b07d5256340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20290 used= 2,079,936 hold= 2,097,152 pm=0x2b07d52d4340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20291 used= 2,079,936 hold= 2,097,152 pm=0x2b07d5352340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20292 used= 2,079,936 hold= 4,194,304 pm=0x2b07d53d0340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20293 used= 2,079,936 hold= 2,097,152 pm=0x2b07d5456340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20294 used= 2,079,936 hold= 2,097,152 pm=0x2b07d54d4340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20300 used= 0 hold= 2,097,152 pm=0x2b07d5552340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20301 used= 0 hold= 2,097,152 pm=0x2b07d55d0340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= 20326 used= 2,079,936 hold= 2,097,152 pm=0x2b07d9656340 ctx_name=DEFAULT_CTX_ID
[MEMORY][PM] tid= summary used= 20,799,360 hold= 27,262,976
[2024-09-13 13:02:35.713998] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=29]
[MEMORY] tenant_id= 1 ctx_id= DEFAULT_CTX_ID hold= 240,267,264 used= 224,448,880 limit= 9,223,372,036,854,775,807
[MEMORY] idle_size= 0 free_size= 0
[MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0
[MEMORY] hold= 31,477,760 used= 31,458,304 count= 1 avg_used= 31,458,304 block_cnt= 1 chunk_cnt= 1 mod=ASHListBuffer
[MEMORY] hold= 30,760,960 used= 30,611,008 count= 26 avg_used= 1,177,346 block_cnt= 26 chunk_cnt= 17 mod=MysqlRequesReco
[MEMORY] hold= 20,807,680 used= 20,797,440 count= 10 avg_used= 2,079,744 block_cnt= 10 chunk_cnt= 10 mod=SqlExecutor
[MEMORY] hold= 12,728,512 used= 12,613,593 count= 84 avg_used= 150,161 block_cnt= 28 chunk_cnt= 8 mod=OmtTenant
[MEMORY] hold= 11,010,048 used= 10,768,896 count= 192 avg_used= 56,088 block_cnt= 192 chunk_cnt= 29 mod=[T]ObSessionDIB
[MEMORY] hold= 10,719,232 used= 10,670,496 count= 10 avg_used= 1,067,049 block_cnt= 10 chunk_cnt= 7 mod=IoControl
[MEMORY] hold= 8,777,728 used= 8,760,064 count= 2 avg_used= 4,380,032 block_cnt= 2 chunk_cnt= 2 mod=FreeTbltStream
[MEMORY] hold= 8,540,160 used= 8,519,680 count= 1 avg_used= 8,519,680 block_cnt= 1 chunk_cnt= 1 mod=RCSrv
[MEMORY] hold= 8,540,160 used= 8,519,680 count= 1 avg_used= 8,519,680 block_cnt= 1 chunk_cnt= 1 mod=ArcFetchQueue
[MEMORY] hold= 5,730,304 used= 5,701,632 count= 2 avg_used= 2,850,816 block_cnt= 2 chunk_cnt= 2 mod=ServerObjecPool
[MEMORY] hold= 4,943,872 used= 4,915,456 count= 2 avg_used= 2,457,728 block_cnt= 2 chunk_cnt= 2 mod=HashBuckDmId
[MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=HashBuckDmChe
[MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=SqlPlanMonMap
[MEMORY] hold= 3,865,600 used= 3,705,600 count= 800 avg_used= 4,632 block_cnt= 800 chunk_cnt= 10 mod=CkptDgnMemCU
[MEMORY] hold= 3,865,600 used= 3,705,600 count= 800 avg_used= 4,632 block_cnt= 800 chunk_cnt= 11 mod=CkptDgnMem
[MEMORY] hold= 3,145,728 used= 2,228,224 count= 128 avg_used= 17,408 block_cnt= 128 chunk_cnt= 5 mod=SqlDtlQueue
[MEMORY] hold= 3,047,424 used= 3,016,704 count= 4 avg_used= 754,176 block_cnt= 4 chunk_cnt= 4 mod=ResourceGroup
[MEMORY] hold= 2,756,608 used= 2,720,005 count= 3 avg_used= 906,668 block_cnt= 3 chunk_cnt= 3 mod=SqlDtl1stBuf
[MEMORY] hold= 2,650,112 used= 2,631,360 count= 1 avg_used= 2,631,360 block_cnt= 1 chunk_cnt= 1 mod=RpcStatInfo
[MEMORY] hold= 2,379,776 used= 2,359,608 count= 1 avg_used= 2,359,608 block_cnt= 1 chunk_cnt= 1 mod=HashBuckDTLINT
[MEMORY] hold= 2,379,776 used= 2,359,608 count= 1 avg_used= 2,359,608 block_cnt= 1 chunk_cnt= 1 mod=MediumTabletMap
[MEMORY] hold= 2,375,680 used= 2,359,536 count= 2 avg_used= 1,179,768 block_cnt= 2 chunk_cnt= 2 mod=HashBuckLCSta
[MEMORY] hold= 2,248,704 used= 2,228,224 count= 1 avg_used= 2,228,224 block_cnt= 1 chunk_cnt= 1 mod=LogIOCb
[MEMORY] hold= 2,169,056 used= 408,600 count= 24,952 avg_used= 16 block_cnt= 266 chunk_cnt= 2 mod=Number
[MEMORY] hold= 1,670,976 used= 1,663,936 count= 7 avg_used= 237,705 block_cnt= 7 chunk_cnt= 2 mod=PoolFreeList
[MEMORY] hold= 1,581,056 used= 1,572,904 count= 1 avg_used= 1,572,904 block_cnt= 1 chunk_cnt= 1 mod=TabletMap
[MEMORY] hold= 1,335,296 used= 1,331,072 count= 1 avg_used= 1,331,072 block_cnt= 1 chunk_cnt= 1 mod=TransService
[MEMORY] hold= 1,294,336 used= 1,280,384 count= 2 avg_used= 640,192 block_cnt= 2 chunk_cnt= 1 mod=TransTimeWheel
[MEMORY] hold= 1,294,336 used= 1,280,384 count= 2 avg_used= 640,192 block_cnt= 2 chunk_cnt= 1 mod=XATimeWheel
[MEMORY] hold= 1,187,840 used= 1,179,768 count= 1 avg_used= 1,179,768 block_cnt= 1 chunk_cnt= 1 mod=RewriteRuleMap
[MEMORY] hold= 1,187,840 used= 1,179,768 count= 1 avg_used= 1,179,768 block_cnt= 1 chunk_cnt= 1 mod=HashBuckPlanCac
[MEMORY] hold= 1,015,808 used= 1,014,656 count= 4 avg_used= 253,664 block_cnt= 4 chunk_cnt= 3 mod=SQLSessionInfo
[MEMORY] hold= 958,464 used= 950,272 count= 1 avg_used= 950,272 block_cnt= 1 chunk_cnt= 1 mod=IOWorkerLQ
[MEMORY] hold= 933,888 used= 931,072 count= 1 avg_used= 931,072 block_cnt= 1 chunk_cnt= 1 mod=ArcSenderQueue
[MEMORY] hold= 811,008 used= 802,648 count= 11 avg_used= 72,968 block_cnt= 11 chunk_cnt= 5 mod=CommSysVarFac
[MEMORY] hold= 802,816 used= 800,000 count= 1 avg_used= 800,000 block_cnt= 1 chunk_cnt= 1 mod=SqlFltSpanRec
[MEMORY] hold= 802,816 used= 800,000 count= 1 avg_used= 800,000 block_cnt= 1 chunk_cnt= 1 mod=SqlPlanMon
[MEMORY] hold= 786,688 used= 524,352 count= 33 avg_used= 15,889 block_cnt= 33 chunk_cnt= 4 mod=LogAlloc
[MEMORY] hold= 663,552 used= 659,200 count= 1 avg_used= 659,200 block_cnt= 1 chunk_cnt= 1 mod=MulLevelQueue
[MEMORY] hold= 663,552 used= 655,360 count= 1 avg_used= 655,360 block_cnt= 1 chunk_cnt= 1 mod=FetchLog
[MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=FrzTrigger
[MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=ElectTimer
[MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=DetectorTimer
[MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=CoordTF
[MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=DupTbLease
[MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=CoordTR
[MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=OBJLockGC
[MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=MdsT
[MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=MultiVersionGC
[MEMORY] hold= 598,016 used= 590,232 count= 1 avg_used= 590,232 block_cnt= 1 chunk_cnt= 1 mod=DagNetIdMap
[MEMORY] hold= 589,824 used= 524,288 count= 8 avg_used= 65,536 block_cnt= 8 chunk_cnt= 3 mod=[T]char
[MEMORY] hold= 409,600 used= 401,408 count= 1 avg_used= 401,408 block_cnt= 1 chunk_cnt= 1 mod=ApplySrv
[MEMORY] hold= 409,600 used= 401,408 count= 1 avg_used= 401,408 block_cnt= 1 chunk_cnt= 1 mod=ReplaySrv
[MEMORY] hold= 409,600 used= 389,600 count= 4 avg_used= 97,400 block_cnt= 4 chunk_cnt= 2 mod=ResultSet
[MEMORY] hold= 385,024 used= 375,520 count= 2 avg_used= 187,760 block_cnt= 2 chunk_cnt= 2 mod=bf_queue
[MEMORY] hold= 303,104 used= 294,936 count= 1 avg_used= 294,936 block_cnt= 1 chunk_cnt= 1 mod=ColUsagHashMap
[MEMORY] hold= 303,104 used= 294,936 count= 1 avg_used= 294,936 block_cnt= 1 chunk_cnt= 1 mod=DmlStatHashMap
[MEMORY] hold= 262,144 used= 128,768 count= 16 avg_used= 8,048 block_cnt= 16 chunk_cnt= 5 mod=[T]ObPerfEventR
[MEMORY] hold= 260,096 used= 253,952 count= 32 avg_used= 7,936 block_cnt= 32 chunk_cnt= 5 mod=SqlSession
[MEMORY] hold= 207,072 used= 149,504 count= 258 avg_used= 579 block_cnt= 26 chunk_cnt= 3 mod=LSMap
[MEMORY] hold= 204,800 used= 196,744 count= 1 avg_used= 196,744 block_cnt= 1 chunk_cnt= 1 mod=DagNetMap
[MEMORY] hold= 204,800 used= 196,744 count= 1 avg_used= 196,744 block_cnt= 1 chunk_cnt= 1 mod=DagMap
[MEMORY] hold= 204,800 used= 196,616 count= 1 avg_used= 196,616 block_cnt= 1 chunk_cnt= 1 mod=ResourMapLock
[MEMORY] hold= 204,800 used= 196,616 count= 1 avg_used= 196,616 block_cnt= 1 chunk_cnt= 1 mod=T3MBucket
[MEMORY] hold= 180,224 used= 131,072 count= 256 avg_used= 512 block_cnt= 24 chunk_cnt= 1 mod=TabletToLS
[MEMORY] hold= 147,456 used= 139,264 count= 1 avg_used= 139,264 block_cnt= 1 chunk_cnt= 1 mod=RFLTaskQueue
[MEMORY] hold= 118,400 used= 113,600 count= 25 avg_used= 4,544 block_cnt= 25 chunk_cnt= 6 mod=[T]ObTraceEvent
[MEMORY] hold= 114,688 used= 106,496 count= 1 avg_used= 106,496 block_cnt= 1 chunk_cnt= 1 mod=SqlDtlMgr
[MEMORY] hold= 99,520 used= 24,720 count= 372 avg_used= 66 block_cnt= 98 chunk_cnt= 20 mod=Coro
[MEMORY] hold= 92,880 used= 1,280 count= 1,152 avg_used= 1 block_cnt= 12 chunk_cnt= 1 mod=CharsetUtil
[MEMORY] hold= 90,112 used= 82,112 count= 1 avg_used= 82,112 block_cnt= 1 chunk_cnt= 1 mod=MetaMemMgr
[MEMORY] hold= 89,408 used= 87,296 count= 11 avg_used= 7,936 block_cnt= 11 chunk_cnt= 5 mod=PlanVaIdx
[MEMORY] hold= 89,408 used= 87,296 count= 11 avg_used= 7,936 block_cnt= 11 chunk_cnt= 5 mod=SqlSessiVarMap
[MEMORY] hold= 81,280 used= 79,360 count= 10 avg_used= 7,936 block_cnt= 10 chunk_cnt= 5 mod=LSIter
[MEMORY] hold= 73,728 used= 72,736 count= 1 avg_used= 72,736 block_cnt= 1 chunk_cnt= 1 mod=LogSharedQueueT
[MEMORY] hold= 73,728 used= 65,408 count= 1 avg_used= 65,408 block_cnt= 1 chunk_cnt= 1 mod=DEVICE_MANAGER
[MEMORY] hold= 73,728 used= 66,176 count= 1 avg_used= 66,176 block_cnt= 1 chunk_cnt= 1 mod=Rpc
[MEMORY] hold= 65,536 used= 34,816 count= 4 avg_used= 8,704 block_cnt= 4 chunk_cnt= 3 mod=[T]ObDSActionAr
[MEMORY] hold= 48,960 used= 44,840 count= 2 avg_used= 22,420 block_cnt= 2 chunk_cnt= 2 mod=DynamicFactor
[MEMORY] hold= 45,056 used= 32,768 count= 64 avg_used= 512 block_cnt= 8 chunk_cnt= 3 mod=TxCtxMgr
[MEMORY] hold= 43,360 used= 4,608 count= 192 avg_used= 24 block_cnt= 73 chunk_cnt= 19 mod=[T]MemoryContex
[MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HB_SERVICE
[MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=Autoincrement
[MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=SuspectInfoBkt
[MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=DagWarnHisBkt
[MEMORY] hold= 40,640 used= 39,680 count= 5 avg_used= 7,936 block_cnt= 5 chunk_cnt= 1 mod=ObDMMDL
[MEMORY] hold= 32,768 used= 24,688 count= 2 avg_used= 12,344 block_cnt= 2 chunk_cnt= 2 mod=TLD_ClientTask
[MEMORY] hold= 32,768 used= 17,408 count= 2 avg_used= 8,704 block_cnt= 2 chunk_cnt= 1 mod=ObLogEXTTP
[MEMORY] hold= 32,768 used= 25,664 count= 1 avg_used= 25,664 block_cnt= 1 chunk_cnt= 1 mod=TSQLSessionMgr
[MEMORY] hold= 24,576 used= 16,384 count= 1 avg_used= 16,384 block_cnt= 1 chunk_cnt= 1 mod=SlogWriteBuffer
[MEMORY] hold= 24,576 used= 17,664 count= 1 avg_used= 17,664 block_cnt= 1 chunk_cnt= 1 mod=IO_MGR
[MEMORY] hold= 19,200 used= 15,360 count= 20 avg_used= 768 block_cnt= 15 chunk_cnt= 8 mod=TGTimer
[MEMORY] hold= 17,024 used= 8,448 count= 3 avg_used= 2,816 block_cnt= 2 chunk_cnt= 2 mod=BaseLogWriter
[MEMORY] hold= 16,384 used= 8,448 count= 1 avg_used= 8,448 block_cnt= 1 chunk_cnt= 1 mod=PalfEnv
[MEMORY] hold= 16,384 used= 8,192 count= 1 avg_used= 8,192 block_cnt= 1 chunk_cnt= 1 mod=SlogNopLog
[MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=TabletStats
[MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=TLD_AssignedMgr
[MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=backupTaskSched
[MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=TLD_TblCtxIMgr
[MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=TLD_TableCtxMgr
[MEMORY] hold= 16,000 used= 15,616 count= 2 avg_used= 7,808 block_cnt= 2 chunk_cnt= 2 mod=HashNodeLCSta
[MEMORY] hold= 15,552 used= 12,096 count= 18 avg_used= 672 block_cnt= 16 chunk_cnt= 7 mod=[T]ObWarningBuf
[MEMORY] hold= 15,232 used= 10,400 count= 25 avg_used= 416 block_cnt= 12 chunk_cnt= 4 mod=OMT_Worker
[MEMORY] hold= 9,664 used= 9,264 count= 2 avg_used= 4,632 block_cnt= 2 chunk_cnt= 1 mod=WorkerMap
[MEMORY] hold= 8,576 used= 8,192 count= 2 avg_used= 4,096 block_cnt= 2 chunk_cnt= 2 mod=LinearHashMapDi
[MEMORY] hold= 8,576 used= 8,192 count= 2 avg_used= 4,096 block_cnt= 2 chunk_cnt= 2 mod=LinearHashMapCn
[MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=HTableLockMap
[MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=APPLY_STATUS
[MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ShareBlksMap
[MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=MdsDebugMap
[MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=DASIDCache
[MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=REPLAY_STATUS
[MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=LockWaitMgr
[MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=IORunners
[MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=LCLSender
[MEMORY] hold= 8,000 used= 7,808 count= 1 avg_used= 7,808 block_cnt= 1 chunk_cnt= 1 mod=HashNodePlanCac
[MEMORY] hold= 7,744 used= 5,632 count= 11 avg_used= 512 block_cnt= 9 chunk_cnt= 5 mod=SqlSessiQuerSql
[MEMORY] hold= 6,928 used= 4,664 count= 11 avg_used= 424 block_cnt= 9 chunk_cnt= 6 mod=PackStateMap
[MEMORY] hold= 6,864 used= 4,664 count= 11 avg_used= 424 block_cnt= 10 chunk_cnt= 4 mod=SequenceMap
[MEMORY] hold= 6,864 used= 4,664 count= 11 avg_used= 424 block_cnt= 11 chunk_cnt= 6 mod=SequenceIdMap
[MEMORY] hold= 6,864 used= 4,664 count= 11 avg_used= 424 block_cnt= 10 chunk_cnt= 5 mod=ContextsMap
[MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=ResGrpIdMap
[MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=PxPoolBkt
[MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=ResRuleIdMap
[MEMORY] hold= 4,288 used= 4,096 count= 1 avg_used= 4,096 block_cnt= 1 chunk_cnt= 1 mod=MacroFile
[MEMORY] hold= 3,904 used= 3,712 count= 1 avg_used= 3,712 block_cnt= 1 chunk_cnt= 1 mod=SqlDtlDfc
[MEMORY] hold= 3,328 used= 1,792 count= 8 avg_used= 224 block_cnt= 1 chunk_cnt= 1 mod=LogIOTask
[MEMORY] hold= 2,048 used= 1,856 count= 1 avg_used= 1,856 block_cnt= 1 chunk_cnt= 1 mod=LogIOWS
[MEMORY] hold= 2,000 used= 1,800 count= 1 avg_used= 1,800 block_cnt= 1 chunk_cnt= 1 mod=PxResMgr
[MEMORY] hold= 1,792 used= 1,600 count= 1 avg_used= 1,600 block_cnt= 1 chunk_cnt= 1 mod=LogPartFetCtxPo
[MEMORY] hold= 1,744 used= 1,544 count= 1 avg_used= 1,544 block_cnt= 1 chunk_cnt= 1 mod=TabStatMgrLock
[MEMORY] hold= 1,744 used= 1,352 count= 2 avg_used= 676 block_cnt= 2 chunk_cnt= 2 mod=DetectManager
[MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=DUP_LS_SET
[MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=IRMMemHashBuck
[MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=HashBucApiGroup
[MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=GROUP_INDEX_MAP
[MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=GCMemtableMap
[MEMORY] hold= 1,280 used= 1,080 count= 1 avg_used= 1,080 block_cnt= 1 chunk_cnt= 1 mod=ModuleInitCtx
[MEMORY] hold= 1,248 used= 1,056 count= 1 avg_used= 1,056 block_cnt= 1 chunk_cnt= 1 mod=LOG_HASH_MAP
[MEMORY] hold= 1,120 used= 120 count= 5 avg_used= 24 block_cnt= 1 chunk_cnt= 1 mod=FreezeTask
[MEMORY] hold= 1,024 used= 640 count= 2 avg_used= 320 block_cnt= 1 chunk_cnt= 1 mod=PoolArenaArray
[MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=DiskUsageTimer
[MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=TabletGC
[MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=CheckPointTimer
[MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=FlushTimer
[MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=TLD_TIMER
[MEMORY] hold= 960 used= 768 count= 1 avg_used= 768 block_cnt= 1 chunk_cnt= 1 mod=TabletShell
[MEMORY] hold= 848 used= 648 count= 1 avg_used= 648 block_cnt= 1 chunk_cnt= 1 mod=ResRuleInfo
[MEMORY] hold= 752 used= 552 count= 1 avg_used= 552 block_cnt= 1 chunk_cnt= 1 mod=LSFreeze
[MEMORY] hold= 576 used= 384 count= 1 avg_used= 384 block_cnt= 1 chunk_cnt= 1 mod=HAScheduler
[MEMORY] hold= 576 used= 384 count= 1 avg_used= 384 block_cnt= 1 chunk_cnt= 1 mod=Scheduler
[MEMORY] hold= 576 used= 384 count= 1 avg_used= 384 block_cnt= 1 chunk_cnt= 1 mod=MSTXCTX
[MEMORY] hold= 544 used= 144 count= 2 avg_used= 72 block_cnt= 2 chunk_cnt= 1 mod=TntSrvObjPool
[MEMORY] hold= 512 used= 120 count= 2 avg_used= 60 block_cnt= 2 chunk_cnt= 1 mod=UserResourceMgr
[MEMORY] hold= 416 used= 16 count= 2 avg_used= 8 block_cnt= 2 chunk_cnt= 2 mod=ObLogEXTHandler
[MEMORY] hold= 352 used= 112 count= 1 avg_used= 112 block_cnt= 1 chunk_cnt= 1 mod=Coordinator
[MEMORY] hold= 256 used= 56 count= 1 avg_used= 56 block_cnt= 1 chunk_cnt= 1 mod=ResRuleInfoMap
[MEMORY] hold= 208 used= 16 count= 1 avg_used= 16 block_cnt= 1 chunk_cnt= 1 mod=logservice
[MEMORY] hold= 224,448,880 used= 219,256,758 count= 29,753 avg_used= 7,369 mod=SUMMARY
[2024-09-13 13:02:35.714098] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=77]
[MEMORY] tenant_id= 1 ctx_id= PLAN_CACHE_CTX_ID hold= 2,097,152 used= 229,504 limit= 9,223,372,036,854,775,807
[MEMORY] idle_size= 0 free_size= 0
[MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0
[MEMORY] hold= 212,864 used= 193,616 count= 6 avg_used= 32,269 block_cnt= 6 chunk_cnt= 1 mod=SqlPhyPlan
[MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=SqlPlanCache
[MEMORY] hold= 6,528 used= 5,952 count= 3 avg_used= 1,984 block_cnt= 2 chunk_cnt= 1 mod=CreateContext
[MEMORY] hold= 1,984 used= 1,600 count= 2 avg_used= 800 block_cnt= 1 chunk_cnt= 1 mod=PlanCache
[MEMORY] hold= 229,504 used= 209,104 count= 12 avg_used= 17,425 mod=SUMMARY
[2024-09-13 13:02:35.714114] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=6]
[MEMORY] tenant_id= 1 ctx_id= GLIBC hold= 2,097,152 used= 80,992 limit= 9,223,372,036,854,775,807
[MEMORY] idle_size= 0 free_size= 0
[MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0
[MEMORY] hold= 80,896 used= 59,018 count= 263 avg_used= 224 block_cnt= 22 chunk_cnt= 1 mod=PlJit
[MEMORY] hold= 96 used= 32 count= 1 avg_used= 32 block_cnt= 1 chunk_cnt= 1 mod=PlCodeGen
[MEMORY] hold= 80,992 used= 59,050 count= 264 avg_used= 223 mod=SUMMARY
[2024-09-13 13:02:35.714124] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=4]
[MEMORY] tenant_id= 1 ctx_id= CO_STACK hold= 102,760,448 used= 99,606,528 limit= 9,223,372,036,854,775,807
[MEMORY] idle_size= 0 free_size= 0
[MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0
[MEMORY] hold= 99,606,528 used= 99,421,248 count= 193 avg_used= 515,136 block_cnt= 193 chunk_cnt= 49 mod=CoStack
[MEMORY] hold= 99,606,528 used= 99,421,248 count= 193 avg_used= 515,136 mod=SUMMARY
[2024-09-13 13:02:35.714144] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=5]
[MEMORY] tenant_id= 1 ctx_id= META_OBJ_CTX_ID hold= 2,097,152 used= 401,408 limit= 644,245,080
[MEMORY] idle_size= 0 free_size= 0
[MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0
[MEMORY] hold= 401,408 used= 400,064 count= 2 avg_used= 200,032 block_cnt= 2 chunk_cnt= 1 mod=PoolFreeList
[MEMORY] hold= 401,408 used= 400,064 count= 2 avg_used= 200,032 mod=SUMMARY
[2024-09-13 13:02:35.714172] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=11]
[MEMORY] tenant_id= 1 ctx_id= RPC_CTX_ID hold= 6,291,456 used= 909,312 limit= 9,223,372,036,854,775,807
[MEMORY] idle_size= 0 free_size= 0
[MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0
[MEMORY] hold= 638,976 used= 422,656 count= 26 avg_used= 16,256 block_cnt= 26 chunk_cnt= 3 mod=[L]OB_REMOTE_SY
[MEMORY] hold= 196,608 used= 130,048 count= 8 avg_used= 16,256 block_cnt= 8 chunk_cnt= 3 mod=[L]OB_PX_TARGET
[MEMORY] hold= 73,728 used= 48,768 count= 3 avg_used= 16,256 block_cnt= 3 chunk_cnt= 2 mod=[L]OB_REMOTE_EX
[MEMORY] hold= 909,312 used= 601,472 count= 37 avg_used= 16,256 mod=SUMMARY
[2024-09-13 13:02:35.714297] INFO [LIB] operator() (ob_malloc_allocator.cpp:519) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=14]
[MEMORY] tenant: 500, limit: 9,223,372,036,854,775,807 hold: 540,209,152 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0
[MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 169,332,736 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= GLIBC hold_bytes= 6,291,456 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= CO_STACK hold_bytes= 104,857,600 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= LIBEASY hold_bytes= 4,194,304 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= LOGGER_CTX_ID hold_bytes= 20,971,520 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= RPC_CTX_ID hold_bytes= 2,097,152 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= PKT_NIO hold_bytes= 18,989,056 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= SCHEMA_SERVICE hold_bytes= 11,292,672 limit= 9,223,372,036,854,775,807
[MEMORY] ctx_id= UNEXPECTED_IN_500 hold_bytes= 202,182,656 limit= 9,223,372,036,854,775,807
[2024-09-13 13:02:35.714525] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=13]
[MEMORY] tenant_id= 500 ctx_id= DEFAULT_CTX_ID hold= 169,332,736 used= 164,500,608 limit= 9,223,372,036,854,775,807
[MEMORY] idle_size= 0 free_size= 0
[MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0
[MEMORY] hold= 33,574,912 used= 33,554,464 count= 1 avg_used= 33,554,464 block_cnt= 1 chunk_cnt= 1 mod=BloomFilter
[MEMORY] hold= 12,779,520 used= 12,760,352 count= 1 avg_used= 12,760,352 block_cnt= 1 chunk_cnt= 1 mod=MemDumpContext
[MEMORY] hold= 11,526,144 used= 11,273,688 count= 201 avg_used= 56,088 block_cnt= 201 chunk_cnt= 28 mod=[T]ObSessionDIB
[MEMORY] hold= 10,792,960 used= 10,735,904 count= 11 avg_used= 975,991 block_cnt= 11 chunk_cnt= 8 mod=IoControl
[MEMORY] hold= 9,457,664 used= 9,437,784 count= 1 avg_used= 9,437,784 block_cnt= 1 chunk_cnt= 1 mod=HashBuckInteChe
[MEMORY] hold= 6,919,312 used= 6,816,688 count= 51 avg_used= 133,660 block_cnt= 31 chunk_cnt= 12 mod=PartitTableTask
[MEMORY] hold= 5,218,304 used= 5,157,040 count= 6 avg_used= 859,506 block_cnt= 6 chunk_cnt= 4 mod=KvstCachWashStr
[MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=PxP2PDhMgrKey
[MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=HashPxBlooFilKe
[MEMORY] hold= 4,739,072 used= 4,718,712 count= 1 avg_used= 4,718,712 block_cnt= 1 chunk_cnt= 1 mod=HashBucTenComMo
[MEMORY] hold= 4,307,456 used= 4,278,711 count= 3 avg_used= 1,426,237 block_cnt= 3 chunk_cnt= 3 mod=SqlDtlMgr
[MEMORY] hold= 4,232,192 used= 4,203,008 count= 6 avg_used= 700,501 block_cnt= 5 chunk_cnt= 4 mod=BaseLogWriter
[MEMORY] hold= 4,214,896 used= 4,194,328 count= 2 avg_used= 2,097,164 block_cnt= 2 chunk_cnt= 2 mod=SerFuncRegHT
[MEMORY]
hold= 4,194,304 used= 4,176,267 count= 1 avg_used= 4,176,267 block_cnt= 1 chunk_cnt= 1 mod=SyslogCompress [MEMORY] hold= 3,997,696 used= 3,904,096 count= 12 avg_used= 325,341 block_cnt= 12 chunk_cnt= 6 mod=DedupQueue [MEMORY] hold= 2,379,776 used= 2,359,608 count= 1 avg_used= 2,359,608 block_cnt= 1 chunk_cnt= 1 mod=HashBucIdUnitMa [MEMORY] hold= 2,167,552 used= 2,129,912 count= 7 avg_used= 304,273 block_cnt= 7 chunk_cnt= 4 mod=FixedQueue [MEMORY] hold= 2,061,920 used= 1,937,608 count= 491 avg_used= 3,946 block_cnt= 246 chunk_cnt= 3 mod=CharsetInit [MEMORY] hold= 1,581,056 used= 1,572,904 count= 1 avg_used= 1,572,904 block_cnt= 1 chunk_cnt= 1 mod=DInsSstMgr [MEMORY] hold= 1,581,056 used= 1,572,904 count= 1 avg_used= 1,572,904 block_cnt= 1 chunk_cnt= 1 mod=IdConnMap [MEMORY] hold= 1,548,288 used= 1,507,118 count= 5 avg_used= 301,423 block_cnt= 5 chunk_cnt= 3 mod=LDIOSetup [MEMORY] hold= 1,416,192 used= 1,395,200 count= 35 avg_used= 39,862 block_cnt= 35 chunk_cnt= 12 mod=CommonArray [MEMORY] hold= 1,327,104 used= 1,114,112 count= 1,026 avg_used= 1,085 block_cnt= 97 chunk_cnt= 2 mod=TabletLSMap [MEMORY] hold= 1,286,144 used= 1,270,296 count= 2 avg_used= 635,148 block_cnt= 2 chunk_cnt= 2 mod=PxResMgr [MEMORY] hold= 1,230,208 used= 1,213,504 count= 10 avg_used= 121,350 block_cnt= 9 chunk_cnt= 8 mod=TenantCtxAlloca [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=TenantResCtrl [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=ConcurHashMap [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=ResRuleInfoMap [MEMORY] hold= 1,056,768 used= 1,048,608 count= 1 avg_used= 1,048,608 block_cnt= 1 chunk_cnt= 1 mod=TsSourceInfoMap [MEMORY] hold= 941,984 used= 925,792 count= 3 avg_used= 308,597 block_cnt= 3 chunk_cnt= 2 mod=Omt [MEMORY] hold= 936,832 used= 855,104 count= 431 avg_used= 1,984 block_cnt= 146 chunk_cnt= 22 mod=CreateContext 
[MEMORY] hold= 933,888 used= 917,576 count= 2 avg_used= 458,788 block_cnt= 2 chunk_cnt= 2 mod=CACHE_INST_MAP [MEMORY] hold= 761,856 used= 733,184 count= 4 avg_used= 183,296 block_cnt= 4 chunk_cnt= 3 mod=LightyQueue [MEMORY] hold= 709,152 used= 658,248 count= 11 avg_used= 59,840 block_cnt= 11 chunk_cnt= 6 mod=HashBucket [MEMORY] hold= 695,360 used= 670,480 count= 86 avg_used= 7,796 block_cnt= 86 chunk_cnt= 7 mod=MallocInfoMap [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=GEleTimer [MEMORY] hold= 647,168 used= 640,192 count= 1 avg_used= 640,192 block_cnt= 1 chunk_cnt= 1 mod=EventTimer [MEMORY] hold= 622,592 used= 619,008 count= 1 avg_used= 619,008 block_cnt= 1 chunk_cnt= 1 mod=SysTaskStatus [MEMORY] hold= 557,056 used= 295,936 count= 34 avg_used= 8,704 block_cnt= 34 chunk_cnt= 9 mod=[T]ObDSActionAr [MEMORY] hold= 544,448 used= 531,712 count= 67 avg_used= 7,936 block_cnt= 67 chunk_cnt= 3 mod=ModulePageAlloc [MEMORY] hold= 516,096 used= 458,752 count= 7 avg_used= 65,536 block_cnt= 7 chunk_cnt= 4 mod=[T]char [MEMORY] hold= 510,880 used= 442,352 count= 280 avg_used= 1,579 block_cnt= 85 chunk_cnt= 10 mod=tg [MEMORY] hold= 401,408 used= 393,256 count= 1 avg_used= 393,256 block_cnt= 1 chunk_cnt= 1 mod=TablStorStatMgr [MEMORY] hold= 369,088 used= 330,944 count= 7 avg_used= 47,277 block_cnt= 7 chunk_cnt= 3 mod=Rpc [MEMORY] hold= 303,104 used= 294,936 count= 1 avg_used= 294,936 block_cnt= 1 chunk_cnt= 1 mod=register_tasks [MEMORY] hold= 303,104 used= 294,936 count= 1 avg_used= 294,936 block_cnt= 1 chunk_cnt= 1 mod=register_task [MEMORY] hold= 229,376 used= 224,000 count= 1 avg_used= 224,000 block_cnt= 1 chunk_cnt= 1 mod=BGTMonitor [MEMORY] hold= 221,184 used= 214,432 count= 1 avg_used= 214,432 block_cnt= 1 chunk_cnt= 1 mod=CompSuggestMgr [MEMORY] hold= 221,184 used= 212,992 count= 1 avg_used= 212,992 block_cnt= 1 chunk_cnt= 1 mod=TSWorker [MEMORY] hold= 215,024 used= 210,352 count= 27 avg_used= 7,790 block_cnt= 27 chunk_cnt= 8 
mod=HashNode [MEMORY] hold= 212,992 used= 196,624 count= 2 avg_used= 98,312 block_cnt= 2 chunk_cnt= 1 mod=DdlQue [MEMORY] hold= 212,992 used= 207,168 count= 1 avg_used= 207,168 block_cnt= 1 chunk_cnt= 1 mod=TenantMutilAllo [MEMORY] hold= 212,992 used= 196,624 count= 2 avg_used= 98,312 block_cnt= 2 chunk_cnt= 1 mod=DRTaskMap [MEMORY] hold= 207,088 used= 149,504 count= 258 avg_used= 579 block_cnt= 108 chunk_cnt= 5 mod=LSLocationMap [MEMORY] hold= 197,344 used= 158,736 count= 12 avg_used= 13,228 block_cnt= 11 chunk_cnt= 6 mod=BucketLock [MEMORY] hold= 180,224 used= 172,064 count= 1 avg_used= 172,064 block_cnt= 1 chunk_cnt= 1 mod=TenantMBList [MEMORY] hold= 171,312 used= 167,080 count= 22 avg_used= 7,594 block_cnt= 22 chunk_cnt= 2 mod=TenaSpaTabIdSet [MEMORY] hold= 163,840 used= 147,792 count= 2 avg_used= 73,896 block_cnt= 2 chunk_cnt= 1 mod=HashNodNexWaiMa [MEMORY] hold= 155,648 used= 147,624 count= 1 avg_used= 147,624 block_cnt= 1 chunk_cnt= 1 mod=OB_DISK_REP [MEMORY] hold= 155,648 used= 148,032 count= 1 avg_used= 148,032 block_cnt= 1 chunk_cnt= 1 mod=CompEventMgr [MEMORY] hold= 155,648 used= 147,624 count= 1 avg_used= 147,624 block_cnt= 1 chunk_cnt= 1 mod=UsrRuleMap [MEMORY] hold= 155,312 used= 151,464 count= 20 avg_used= 7,573 block_cnt= 20 chunk_cnt= 3 mod=SysTableNameMap [MEMORY] hold= 147,456 used= 145,936 count= 2 avg_used= 72,968 block_cnt= 2 chunk_cnt= 2 mod=CommSysVarFac [MEMORY] hold= 147,456 used= 130,816 count= 2 avg_used= 65,408 block_cnt= 2 chunk_cnt= 2 mod=KVCACHE_HAZARD [MEMORY] hold= 139,264 used= 131,776 count= 1 avg_used= 131,776 block_cnt= 1 chunk_cnt= 1 mod=GtsTaskQueue [MEMORY] hold= 138,816 used= 134,912 count= 11 avg_used= 12,264 block_cnt= 11 chunk_cnt= 3 mod=SeArray [MEMORY] hold= 131,072 used= 130,384 count= 2 avg_used= 65,192 block_cnt= 2 chunk_cnt= 2 mod=LatchStat [MEMORY] hold= 124,464 used= 122,288 count= 16 avg_used= 7,643 block_cnt= 16 chunk_cnt= 4 mod=HashNodeConfCon [MEMORY] hold= 122,688 used= 118,904 count= 2 avg_used= 59,452 
block_cnt= 2 chunk_cnt= 2 mod=MemMgrForLiboMa [MEMORY] hold= 122,688 used= 118,904 count= 2 avg_used= 59,452 block_cnt= 2 chunk_cnt= 2 mod=RefrFullScheMap [MEMORY] hold= 122,688 used= 118,904 count= 2 avg_used= 59,452 block_cnt= 2 chunk_cnt= 2 mod=TenaSchForCacMa [MEMORY] hold= 122,688 used= 118,904 count= 2 avg_used= 59,452 block_cnt= 2 chunk_cnt= 1 mod=MemMgrMap [MEMORY] hold= 114,688 used= 111,096 count= 1 avg_used= 111,096 block_cnt= 1 chunk_cnt= 1 mod=IndNameMap [MEMORY] hold= 114,688 used= 111,096 count= 1 avg_used= 111,096 block_cnt= 1 chunk_cnt= 1 mod=NonPartTenMap [MEMORY] hold= 114,688 used= 110,736 count= 2 avg_used= 55,368 block_cnt= 2 chunk_cnt= 1 mod=DepInfoTaskQ [MEMORY] hold= 114,336 used= 106,088 count= 2 avg_used= 53,044 block_cnt= 2 chunk_cnt= 2 mod=RetryCtrl [MEMORY] hold= 106,496 used= 86,408 count= 5 avg_used= 17,281 block_cnt= 5 chunk_cnt= 3 mod=HashBuckConfCon [MEMORY] hold= 106,496 used= 92,160 count= 2 avg_used= 46,080 block_cnt= 2 chunk_cnt= 2 mod=LDBlockBitMap [MEMORY] hold= 106,496 used= 98,312 count= 1 avg_used= 98,312 block_cnt= 1 chunk_cnt= 1 mod=TmpFileManager [MEMORY] hold= 105,664 used= 103,168 count= 13 avg_used= 7,936 block_cnt= 13 chunk_cnt= 6 mod=HashMapArray [MEMORY] hold= 98,304 used= 83,072 count= 2 avg_used= 41,536 block_cnt= 2 chunk_cnt= 1 mod=IO_MGR [MEMORY] hold= 95,408 used= 25,600 count= 353 avg_used= 72 block_cnt= 201 chunk_cnt= 19 mod=Coro [MEMORY] hold= 81,920 used= 74,064 count= 2 avg_used= 37,032 block_cnt= 2 chunk_cnt= 2 mod=io_trace_map [MEMORY] hold= 73,728 used= 69,664 count= 1 avg_used= 69,664 block_cnt= 1 chunk_cnt= 1 mod=SuperBlockBuffe [MEMORY] hold= 73,728 used= 65,600 count= 1 avg_used= 65,600 block_cnt= 1 chunk_cnt= 1 mod=TCREF [MEMORY] hold= 65,536 used= 63,272 count= 1 avg_used= 63,272 block_cnt= 1 chunk_cnt= 1 mod=SqlSessionSbloc [MEMORY] hold= 65,264 used= 63,096 count= 2 avg_used= 31,548 block_cnt= 2 chunk_cnt= 1 mod=ScheCacSysCacMa [MEMORY] hold= 65,024 used= 63,488 count= 8 avg_used= 7,936 
block_cnt= 8 chunk_cnt= 6 mod=SessionInfoHash [MEMORY] hold= 61,568 used= 59,072 count= 13 avg_used= 4,544 block_cnt= 13 chunk_cnt= 6 mod=[T]ObTraceEvent [MEMORY] hold= 49,152 used= 37,032 count= 3 avg_used= 12,344 block_cnt= 3 chunk_cnt= 2 mod=ReferedMap [MEMORY] hold= 49,152 used= 32,768 count= 2 avg_used= 16,384 block_cnt= 2 chunk_cnt= 2 mod=CACHE_TNT_LST [MEMORY] hold= 49,152 used= 36,912 count= 2 avg_used= 18,456 block_cnt= 2 chunk_cnt= 2 mod=HashBuckSysConf [MEMORY] hold= 47,168 used= 45,056 count= 11 avg_used= 4,096 block_cnt= 11 chunk_cnt= 6 mod=LinearHashMapCn [MEMORY] hold= 47,168 used= 45,056 count= 11 avg_used= 4,096 block_cnt= 11 chunk_cnt= 7 mod=LinearHashMapDi [MEMORY] hold= 44,608 used= 34,560 count= 14 avg_used= 2,468 block_cnt= 11 chunk_cnt= 5 mod=TGTimer [MEMORY] hold= 44,192 used= 4,800 count= 200 avg_used= 24 block_cnt= 121 chunk_cnt= 17 mod=[T]MemoryContex [MEMORY] hold= 41,280 used= 37,264 count= 2 avg_used= 18,632 block_cnt= 2 chunk_cnt= 1 mod=TaskRunnerSer [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=DDLSpeedCtrl [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HasBucSerMigUnM [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucTenPooMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=TmpFileStoreMap [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucSerUniMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=SessHoldMapBuck [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=SqlLoadData [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=ProxySessBuck [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucIdPoolMa [MEMORY] hold= 40,960 used= 37,032 count= 1 
avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucConPooMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HasBucConRefCoM [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucPooUniMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=ObLongopsMgr [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucNamPooMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucIdConfMa [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=HashBucNamConMa [MEMORY] hold= 40,384 used= 39,680 count= 5 avg_used= 7,936 block_cnt= 5 chunk_cnt= 3 mod=SqlSession [MEMORY] hold= 29,184 used= 25,320 count= 20 avg_used= 1,266 block_cnt= 14 chunk_cnt= 7 mod=ObGuard [MEMORY] hold= 25,632 used= 23,640 count= 10 avg_used= 2,364 block_cnt= 8 chunk_cnt= 5 mod=RpcProcessor [MEMORY] hold= 25,440 used= 24,264 count= 6 avg_used= 4,044 block_cnt= 5 chunk_cnt= 2 mod=ScheObSchemAren [MEMORY] hold= 25,104 used= 20,784 count= 2 avg_used= 10,392 block_cnt= 2 chunk_cnt= 2 mod=SqlNio [MEMORY] hold= 24,576 used= 17,408 count= 1 avg_used= 17,408 block_cnt= 1 chunk_cnt= 1 mod=SvrStartupHandl [MEMORY] hold= 24,576 used= 18,456 count= 1 avg_used= 18,456 block_cnt= 1 chunk_cnt= 1 mod=leakMap [MEMORY] hold= 24,576 used= 18,456 count= 1 avg_used= 18,456 block_cnt= 1 chunk_cnt= 1 mod=GrpNameIdMap [MEMORY] hold= 24,576 used= 18,456 count= 1 avg_used= 18,456 block_cnt= 1 chunk_cnt= 1 mod=FuncRuleMap [MEMORY] hold= 24,576 used= 18,456 count= 1 avg_used= 18,456 block_cnt= 1 chunk_cnt= 1 mod=GrpIdNameMap [MEMORY] hold= 24,576 used= 18,456 count= 1 avg_used= 18,456 block_cnt= 1 chunk_cnt= 1 mod=ServerCkptSlogH [MEMORY] hold= 24,576 used= 16,384 count= 1 avg_used= 16,384 block_cnt= 1 chunk_cnt= 1 mod=SlogWriteBuffer [MEMORY] hold= 24,352 used= 21,672 count= 2 avg_used= 
10,836 block_cnt= 2 chunk_cnt= 2 mod=SchemaStatuMap [MEMORY] hold= 23,552 used= 20,352 count= 16 avg_used= 1,272 block_cnt= 16 chunk_cnt= 2 mod=IO_GROUP_MAP [MEMORY] hold= 18,928 used= 18,144 count= 4 avg_used= 4,536 block_cnt= 4 chunk_cnt= 2 mod=DeviceMng [MEMORY] hold= 17,200 used= 8,944 count= 43 avg_used= 208 block_cnt= 7 chunk_cnt= 2 mod=Scheduler [MEMORY] hold= 16,384 used= 9,392 count= 1 avg_used= 9,392 block_cnt= 1 chunk_cnt= 1 mod=TenCompProgMgr [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=GenSchemVersMap [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=HasBucTimZonInM [MEMORY] hold= 16,384 used= 8,992 count= 1 avg_used= 8,992 block_cnt= 1 chunk_cnt= 1 mod=IO_HEALTH [MEMORY] hold= 16,384 used= 8,192 count= 1 avg_used= 8,192 block_cnt= 1 chunk_cnt= 1 mod=ServerLogPool [MEMORY] hold= 16,384 used= 9,336 count= 1 avg_used= 9,336 block_cnt= 1 chunk_cnt= 1 mod=InnerLobHash [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=StorageHADiag [MEMORY] hold= 16,384 used= 8,192 count= 1 avg_used= 8,192 block_cnt= 1 chunk_cnt= 1 mod=LinkArray [MEMORY] hold= 16,384 used= 8,192 count= 1 avg_used= 8,192 block_cnt= 1 chunk_cnt= 1 mod=SlogNopLog [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=TbltRefreshMap [MEMORY] hold= 16,384 used= 12,296 count= 1 avg_used= 12,296 block_cnt= 1 chunk_cnt= 1 mod=ResourMapLock [MEMORY] hold= 16,384 used= 12,344 count= 1 avg_used= 12,344 block_cnt= 1 chunk_cnt= 1 mod=MemDumpMap [MEMORY] hold= 16,320 used= 15,904 count= 2 avg_used= 7,952 block_cnt= 2 chunk_cnt= 1 mod=UpgProcSet [MEMORY] hold= 16,128 used= 15,872 count= 2 avg_used= 7,936 block_cnt= 2 chunk_cnt= 2 mod=PlanVaIdx [MEMORY] hold= 16,000 used= 15,872 count= 2 avg_used= 7,936 block_cnt= 2 chunk_cnt= 1 mod=CommSysVarDefVa [MEMORY] hold= 11,520 used= 8,384 count= 16 avg_used= 524 block_cnt= 6 chunk_cnt= 3 mod=RpcBuffer 
[MEMORY] hold= 11,520 used= 9,216 count= 12 avg_used= 768 block_cnt= 9 chunk_cnt= 6 mod=timer [MEMORY] hold= 9,200 used= 9,064 count= 2 avg_used= 4,532 block_cnt= 2 chunk_cnt= 1 mod=RedisTypeMap [MEMORY] hold= 8,640 used= 8,048 count= 3 avg_used= 2,682 block_cnt= 3 chunk_cnt= 3 mod=TenantInfo [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=InneSqlConnPool [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=RsEventQueue [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=SqlSessiVarMap [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=SchemaRowKey [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=RpcKeepalive [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ObTsTenantInfoN [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ServerBlacklist [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ServerCidMap [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ServerRegioMap [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=ServerIdcMap [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=IORunners [MEMORY] hold= 8,000 used= 7,808 count= 1 avg_used= 7,808 block_cnt= 1 chunk_cnt= 1 mod=SessHoldMapNode [MEMORY] hold= 7,872 used= 7,808 count= 1 avg_used= 7,808 block_cnt= 1 chunk_cnt= 1 mod=HasNodTzInfM [MEMORY] hold= 6,816 used= 2,304 count= 24 avg_used= 96 block_cnt= 24 chunk_cnt= 3 mod=PThread [MEMORY] hold= 5,728 used= 5,336 count= 2 avg_used= 2,668 block_cnt= 2 chunk_cnt= 2 mod=DeadLock [MEMORY] hold= 5,376 used= 5,248 count= 2 avg_used= 2,624 block_cnt= 1 chunk_cnt= 1 mod=RootContext [MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 
mod=RebuildCtx [MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=HashBuckDmReq [MEMORY] hold= 4,832 used= 4,632 count= 1 avg_used= 4,632 block_cnt= 1 chunk_cnt= 1 mod=SqlPx [MEMORY] hold= 4,016 used= 3,816 count= 1 avg_used= 3,816 block_cnt= 1 chunk_cnt= 1 mod=RemMasterMap [MEMORY] hold= 4,016 used= 3,816 count= 1 avg_used= 3,816 block_cnt= 1 chunk_cnt= 1 mod=RecScheHisMap [MEMORY] hold= 3,264 used= 3,200 count= 1 avg_used= 3,200 block_cnt= 1 chunk_cnt= 1 mod=TenantTZ [MEMORY] hold= 2,960 used= 1,512 count= 7 avg_used= 216 block_cnt= 5 chunk_cnt= 3 mod=ObFuture [MEMORY] hold= 2,768 used= 2,704 count= 1 avg_used= 2,704 block_cnt= 1 chunk_cnt= 1 mod=LoggerAlloc [MEMORY] hold= 2,592 used= 2,016 count= 3 avg_used= 672 block_cnt= 3 chunk_cnt= 3 mod=[T]ObWarningBuf [MEMORY] hold= 2,576 used= 2,328 count= 1 avg_used= 2,328 block_cnt= 1 chunk_cnt= 1 mod=SqlCompile [MEMORY] hold= 2,528 used= 2,328 count= 1 avg_used= 2,328 block_cnt= 1 chunk_cnt= 1 mod=StorageS3 [MEMORY] hold= 2,112 used= 1,920 count= 1 avg_used= 1,920 block_cnt= 1 chunk_cnt= 1 mod=LobManager [MEMORY] hold= 1,648 used= 1,448 count= 1 avg_used= 1,448 block_cnt= 1 chunk_cnt= 1 mod=GtsRequestRpc [MEMORY] hold= 1,616 used= 1,416 count= 1 avg_used= 1,416 block_cnt= 1 chunk_cnt= 1 mod=GtiRequestRpc [MEMORY] hold= 1,568 used= 1,344 count= 1 avg_used= 1,344 block_cnt= 1 chunk_cnt= 1 mod=SchemaService [MEMORY] hold= 1,520 used= 1,328 count= 1 avg_used= 1,328 block_cnt= 1 chunk_cnt= 1 mod=GtiRpcProxy [MEMORY] hold= 1,520 used= 1,328 count= 1 avg_used= 1,328 block_cnt= 1 chunk_cnt= 1 mod=GtsRpcProxy [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=Autoincrement [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=TENANT_PLAN_MAP [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=INGRESS_MAP [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 
mod=IO_CHANNEL_MAP [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=GROUP_INDEX_MAP [MEMORY] hold= 1,472 used= 1,272 count= 1 avg_used= 1,272 block_cnt= 1 chunk_cnt= 1 mod=HashBucRefObj [MEMORY] hold= 1,328 used= 280 count= 5 avg_used= 56 block_cnt= 5 chunk_cnt= 2 mod=Log [MEMORY] hold= 1,280 used= 1,088 count= 1 avg_used= 1,088 block_cnt= 1 chunk_cnt= 1 mod=memdumpqueue [MEMORY] hold= 1,216 used= 960 count= 2 avg_used= 480 block_cnt= 2 chunk_cnt= 2 mod=TntResourceMgr [MEMORY] hold= 992 used= 112 count= 4 avg_used= 28 block_cnt= 4 chunk_cnt= 3 mod=KeepAliveServer [MEMORY] hold= 912 used= 264 count= 3 avg_used= 88 block_cnt= 3 chunk_cnt= 2 mod=DestKAState [MEMORY] hold= 896 used= 704 count= 1 avg_used= 704 block_cnt= 1 chunk_cnt= 1 mod=ScheMgrCacheMap [MEMORY] hold= 704 used= 512 count= 1 avg_used= 512 block_cnt= 1 chunk_cnt= 1 mod=SqlString [MEMORY] hold= 704 used= 512 count= 1 avg_used= 512 block_cnt= 1 chunk_cnt= 1 mod=SqlSessiQuerSql [MEMORY] hold= 704 used= 512 count= 1 avg_used= 512 block_cnt= 1 chunk_cnt= 1 mod=TsMgr [MEMORY] hold= 672 used= 424 count= 1 avg_used= 424 block_cnt= 1 chunk_cnt= 1 mod=ContextsMap [MEMORY] hold= 672 used= 272 count= 2 avg_used= 136 block_cnt= 2 chunk_cnt= 1 mod=unknown [MEMORY] hold= 656 used= 424 count= 1 avg_used= 424 block_cnt= 1 chunk_cnt= 1 mod=PackStateMap [MEMORY] hold= 624 used= 424 count= 1 avg_used= 424 block_cnt= 1 chunk_cnt= 1 mod=SequenceIdMap [MEMORY] hold= 624 used= 424 count= 1 avg_used= 424 block_cnt= 1 chunk_cnt= 1 mod=SequenceMap [MEMORY] hold= 416 used= 32 count= 2 avg_used= 16 block_cnt= 2 chunk_cnt= 1 mod=CreateEntity [MEMORY] hold= 352 used= 160 count= 1 avg_used= 160 block_cnt= 1 chunk_cnt= 1 mod=OccamTimeGuard [MEMORY] hold= 272 used= 56 count= 1 avg_used= 56 block_cnt= 1 chunk_cnt= 1 mod=PxTargetMgr [MEMORY] hold= 128 used= 7 count= 1 avg_used= 7 block_cnt= 1 chunk_cnt= 1 mod=SqlExpr [MEMORY] hold= 164,500,608 used= 161,706,999 count= 4,154 avg_used= 38,928 
mod=SUMMARY [2024-09-13 13:02:35.714653] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=100] [MEMORY] tenant_id= 500 ctx_id= GLIBC hold= 6,291,456 used= 2,794,256 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 1,844,448 used= 905,967 count= 12,922 avg_used= 70 block_cnt= 234 chunk_cnt= 2 mod=Buffer [MEMORY] hold= 893,248 used= 602,446 count= 3,122 avg_used= 192 block_cnt= 193 chunk_cnt= 3 mod=glibc_malloc [MEMORY] hold= 53,600 used= 37,729 count= 229 avg_used= 164 block_cnt= 23 chunk_cnt= 2 mod=S3SDK [MEMORY] hold= 2,960 used= 1,222 count= 20 avg_used= 61 block_cnt= 7 chunk_cnt= 2 mod=XmlGlobal [MEMORY] hold= 2,794,256 used= 1,547,364 count= 16,293 avg_used= 94 mod=SUMMARY [2024-09-13 13:02:35.714670] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=12] [MEMORY] tenant_id= 500 ctx_id= CO_STACK hold= 104,857,600 used= 103,219,200 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 103,219,200 used= 103,027,200 count= 200 avg_used= 515,136 block_cnt= 200 chunk_cnt= 50 mod=CoStack [MEMORY] hold= 103,219,200 used= 103,027,200 count= 200 avg_used= 515,136 mod=SUMMARY [2024-09-13 13:02:35.714684] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=8] [MEMORY] tenant_id= 500 ctx_id= LIBEASY hold= 4,194,304 used= 3,596,256 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 3,596,256 used= 3,464,800 count= 143 avg_used= 24,229 block_cnt= 24 chunk_cnt= 2 mod=OB_TEST2_PCODE [MEMORY] hold= 3,596,256 used= 3,464,800 count= 143 avg_used= 24,229 mod=SUMMARY [2024-09-13 13:02:35.714698] INFO [LIB] print_usage 
(ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=7] [MEMORY] tenant_id= 500 ctx_id= LOGGER_CTX_ID hold= 20,971,520 used= 20,807,680 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 16,646,144 used= 16,637,952 count= 8 avg_used= 2,079,744 block_cnt= 8 chunk_cnt= 8 mod=Logger [MEMORY] hold= 4,161,536 used= 4,159,488 count= 2 avg_used= 2,079,744 block_cnt= 2 chunk_cnt= 2 mod=ErrorLogger [MEMORY] hold= 20,807,680 used= 20,797,440 count= 10 avg_used= 2,079,744 mod=SUMMARY [2024-09-13 13:02:35.714731] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=11] [MEMORY] tenant_id= 500 ctx_id= RPC_CTX_ID hold= 2,097,152 used= 24,576 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 24,576 used= 16,256 count= 1 avg_used= 16,256 block_cnt= 1 chunk_cnt= 1 mod=RpcDefault [MEMORY] hold= 24,576 used= 16,256 count= 1 avg_used= 16,256 mod=SUMMARY [2024-09-13 13:02:35.714772] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=30] [MEMORY] tenant_id= 500 ctx_id= PKT_NIO hold= 18,989,056 used= 16,342,352 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 12,754,256 used= 12,676,888 count= 27 avg_used= 469,514 block_cnt= 12 chunk_cnt= 6 mod=DEFAULT [MEMORY] hold= 1,671,168 used= 1,571,712 count= 12 avg_used= 130,976 block_cnt= 12 chunk_cnt= 3 mod=PKTS_INBUF [MEMORY] hold= 1,253,376 used= 1,178,784 count= 9 avg_used= 130,976 block_cnt= 9 chunk_cnt= 3 mod=PKTC_INBUF [MEMORY] hold= 417,792 used= 276,896 count= 17 avg_used= 16,288 block_cnt= 17 chunk_cnt= 2 mod=SERVER_CTX_CHUN [MEMORY] hold= 98,304 used= 65,152 count= 4 avg_used= 16,288 block_cnt= 4 
chunk_cnt= 1 mod=SERVER_RESP_CHU [MEMORY] hold= 73,728 used= 48,864 count= 3 avg_used= 16,288 block_cnt= 3 chunk_cnt= 1 mod=CLIENT_CB_CHUNK [MEMORY] hold= 73,728 used= 48,864 count= 3 avg_used= 16,288 block_cnt= 3 chunk_cnt= 1 mod=CLIENT_REQ_CHUN [MEMORY] hold= 16,342,352 used= 15,867,160 count= 75 avg_used= 211,562 mod=SUMMARY [2024-09-13 13:02:35.714847] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=11] [MEMORY] tenant_id= 500 ctx_id= SCHEMA_SERVICE hold= 11,292,672 used= 9,801,696 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 7,098,368 used= 7,078,824 count= 1 avg_used= 7,078,824 block_cnt= 1 chunk_cnt= 1 mod=SchemaIdVersion [MEMORY] hold= 2,088,896 used= 2,087,680 count= 2 avg_used= 1,043,840 block_cnt= 2 chunk_cnt= 2 mod=TenantSchemMgr [MEMORY] hold= 294,912 used= 262,144 count= 4 avg_used= 65,536 block_cnt= 4 chunk_cnt= 1 mod=SchemaMgrCache [MEMORY] hold= 200,832 used= 197,618 count= 5 avg_used= 39,523 block_cnt= 3 chunk_cnt= 1 mod=SchemaSysCache [MEMORY] hold= 32,384 used= 31,616 count= 4 avg_used= 7,904 block_cnt= 4 chunk_cnt= 1 mod=ScheTenaInfoVec [MEMORY] hold= 16,832 used= 16,064 count= 4 avg_used= 4,016 block_cnt= 4 chunk_cnt= 1 mod=SchemaSysVariab [MEMORY] hold= 16,192 used= 15,808 count= 2 avg_used= 7,904 block_cnt= 2 chunk_cnt= 1 mod=ScheTablInfoVec [MEMORY] hold= 2,560 used= 1,024 count= 8 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheLabeSeCompo [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=ScheIndeNameMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=ScheTablIdMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=HiddenTblNames [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=ScheRoutIdMap [MEMORY] hold= 2,432 used= 2,048 
count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=ScheRoutNameMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=ScheTablNameMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=SchePackIdMap [MEMORY] hold= 2,432 used= 2,048 count= 2 avg_used= 1,024 block_cnt= 2 chunk_cnt= 1 mod=SchePackNameMap [MEMORY] hold= 1,920 used= 768 count= 6 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheLabeSePolic [MEMORY] hold= 1,920 used= 768 count= 6 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheLabeSeLabel [MEMORY] hold= 1,408 used= 1,024 count= 2 avg_used= 512 block_cnt= 2 chunk_cnt= 1 mod=ScheUdtNameMap [MEMORY] hold= 1,408 used= 1,024 count= 2 avg_used= 512 block_cnt= 2 chunk_cnt= 1 mod=ScheUdtIdMap [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=DBLINK_MGR [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaProfile [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheLabSeUserLe [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaSynonym [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=DIRECTORY_MGR [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=RLS_POLICY_MGR [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheOutlSqlMap [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 4 chunk_cnt= 1 mod=RLS_GROUP_MGR [MEMORY] hold= 1,280 used= 512 count= 4 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=RLS_CONTEXT_MGR [MEMORY] hold= 784 used= 584 count= 1 avg_used= 584 block_cnt= 1 chunk_cnt= 1 mod=TenSchMemMgrFoL [MEMORY] hold= 784 used= 584 count= 1 avg_used= 584 block_cnt= 1 chunk_cnt= 1 mod=TenaScheMemMgr [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaKeystore [MEMORY] hold= 640 used= 256 count= 2 
avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheDataNameMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheAuxVpNameVe [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheForKeyNamMa [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=MockFkParentTab [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaContext [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheConsNameMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheOutlIdMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheOutlNameMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchePriTabPriMa [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=PRIV_ROUTINE [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchePriObjPriMa [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaTablespac [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheTrigIdMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=ScheTrigNameMap [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaUdf [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaSequence [MEMORY] hold= 640 used= 256 count= 2 avg_used= 128 block_cnt= 2 chunk_cnt= 1 mod=SchemaSecurAudi [MEMORY] hold= 9,801,696 used= 9,721,130 count= 136 avg_used= 71,478 mod=SUMMARY [2024-09-13 13:02:35.714899] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=37] [MEMORY] tenant_id= 500 ctx_id= UNEXPECTED_IN_500 hold= 202,182,656 used= 200,226,400 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] 
wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 134,238,416 used= 134,217,736 count= 2 avg_used= 67,108,868 block_cnt= 2 chunk_cnt= 2 mod=CACHE_MAP_BKT [MEMORY] hold= 18,173,952 used= 18,155,880 count= 1 avg_used= 18,155,880 block_cnt= 1 chunk_cnt= 1 mod=CACHE_MB_HANDLE [MEMORY] hold= 17,338,048 used= 8,968,960 count= 1,093 avg_used= 8,205 block_cnt= 1,093 chunk_cnt= 9 mod=StorageLoggerM [MEMORY] hold= 16,807,040 used= 16,786,208 count= 3 avg_used= 5,595,402 block_cnt= 3 chunk_cnt= 2 mod=FixeSizeBlocAll [MEMORY] hold= 6,311,936 used= 6,291,472 count= 1 avg_used= 6,291,472 block_cnt= 1 chunk_cnt= 1 mod=CACHE_MAP_LOCK [MEMORY] hold= 3,698,656 used= 3,284,064 count= 76 avg_used= 43,211 block_cnt= 54 chunk_cnt= 4 mod=OccamThreadPool [MEMORY] hold= 3,592,192 used= 3,573,960 count= 1 avg_used= 3,573,960 block_cnt= 1 chunk_cnt= 1 mod=TenantConfig [MEMORY] hold= 44,480 used= 35,968 count= 2 avg_used= 17,984 block_cnt= 2 chunk_cnt= 2 mod=CommonNetwork [MEMORY] hold= 13,552 used= 1,440 count= 90 avg_used= 16 block_cnt= 3 chunk_cnt= 2 mod=ConfigChecker [MEMORY] hold= 8,128 used= 7,936 count= 1 avg_used= 7,936 block_cnt= 1 chunk_cnt= 1 mod=BlockMap [MEMORY] hold= 200,226,400 used= 191,323,624 count= 1,270 avg_used= 150,648 mod=SUMMARY [2024-09-13 13:02:35.714920] INFO [LIB] operator() (ob_malloc_allocator.cpp:519) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=12] [MEMORY] tenant: 508, limit: 1,073,741,824 hold: 23,621,632 rpc_hold: 0 cache_hold: 0 cache_used: 0 cache_item_count: 0 [MEMORY] ctx_id= DEFAULT_CTX_ID hold_bytes= 6,844,416 limit= 9,223,372,036,854,775,807 [MEMORY] ctx_id= CO_STACK hold_bytes= 16,777,216 limit= 9,223,372,036,854,775,807 [2024-09-13 13:02:35.714949] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=15] [MEMORY] tenant_id= 508 ctx_id= DEFAULT_CTX_ID hold= 6,844,416 used= 5,117,152 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] 
wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 2,650,112 used= 2,631,360 count= 1 avg_used= 2,631,360 block_cnt= 1 chunk_cnt= 1 mod=RpcStatInfo [MEMORY] hold= 1,720,320 used= 1,682,640 count= 30 avg_used= 56,088 block_cnt= 30 chunk_cnt= 2 mod=[T]ObSessionDIB [MEMORY] hold= 663,552 used= 659,200 count= 1 avg_used= 659,200 block_cnt= 1 chunk_cnt= 1 mod=MulLevelQueue [MEMORY] hold= 40,960 used= 37,032 count= 1 avg_used= 37,032 block_cnt= 1 chunk_cnt= 1 mod=DynamicFactor [MEMORY] hold= 18,240 used= 12,480 count= 30 avg_used= 416 block_cnt= 6 chunk_cnt= 1 mod=OMT_Worker [MEMORY] hold= 15,840 used= 3,840 count= 60 avg_used= 64 block_cnt= 6 chunk_cnt= 1 mod=Coro [MEMORY] hold= 6,848 used= 720 count= 30 avg_used= 24 block_cnt= 6 chunk_cnt= 1 mod=[T]MemoryContex [MEMORY] hold= 1,280 used= 1,080 count= 1 avg_used= 1,080 block_cnt= 1 chunk_cnt= 1 mod=ModuleInitCtx [MEMORY] hold= 5,117,152 used= 5,028,352 count= 154 avg_used= 32,651 mod=SUMMARY [2024-09-13 13:02:35.714989] INFO [LIB] print_usage (ob_tenant_ctx_allocator.cpp:178) [19908][MemoryDump][T0][Y0-0000000000000000-0-0] [lt=11] [MEMORY] tenant_id= 508 ctx_id= CO_STACK hold= 16,777,216 used= 15,482,880 limit= 9,223,372,036,854,775,807 [MEMORY] idle_size= 0 free_size= 0 [MEMORY] wash_related_chunks= 0 washed_blocks= 0 washed_size= 0 [MEMORY] hold= 15,482,880 used= 15,454,080 count= 30 avg_used= 515,136 block_cnt= 30 chunk_cnt= 8 mod=CoStack [MEMORY] hold= 15,482,880 used= 15,454,080 count= 30 avg_used= 515,136 mod=SUMMARY [2024-09-13 13:02:35.716257] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=66][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.716805] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:35.716827] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.716834] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.716843] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.716856] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755716854, replica_locations:[]}) [2024-09-13 13:02:35.716883] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.716904] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:51, local_retry_times:51, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", 
retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:35.716924] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.716935] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.716945] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:35.716955] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:35.716959] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:35.716996] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:35.717010] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.717055] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563594172, cache_obj->added_lc()=false, 
cache_obj->get_object_id()=746, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.717982] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.718010] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.718110] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.718481] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.718495] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.718501] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.718514] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.718524] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755718523, replica_locations:[]}) [2024-09-13 13:02:35.718538] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.718546] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.718557] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.718566] WDIAG [SQL.DAS] block_renew_tablet_location 
(ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:35.718575] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:35.718581] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:35.718592] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:35.718605] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:35.718611] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:35.718618] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:35.718627] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:35.718631] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:35.718639] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:35.718650] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:35.718655] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:35.718659] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:35.718668] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:35.718673] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:35.718678] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:35.718694] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:35.718702] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:35.718708] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:35.718713] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:35.718718] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:35.718728] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=52, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:35.718742] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] will sleep(sleep_us=52000, remain_us=554930, base_sleep_us=1000, retry_sleep_type=1, 
v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.728132] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=15] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:35.728244] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=25] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:35.760524] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:35.760555] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=29][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755760517) [2024-09-13 13:02:35.760566] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203755660493, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:35.760586] WDIAG [STORAGE.TRANS] generate_min_weak_read_version 
(ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.760593] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.760598] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755760573) [2024-09-13 13:02:35.770995] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.771400] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.771422] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.771428] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.771445] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.771462] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755771461, replica_locations:[]}) [2024-09-13 13:02:35.771480] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.771503] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:52, local_retry_times:52, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:35.771523] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.771530] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.771544] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:35.771550] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:35.771558] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:35.771571] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:35.771580] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.771626] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563648742, cache_obj->added_lc()=false, cache_obj->get_object_id()=747, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.772633] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 
13:02:35.772661] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.772772] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.773056] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.773070] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.773076] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.773083] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.773096] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203755773095, replica_locations:[]}) [2024-09-13 13:02:35.773110] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.773118] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.773129] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.773140] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:35.773145] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:35.773150] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, 
candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:35.773161] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:35.773175] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:35.773180] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:35.773185] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:35.773194] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:35.773199] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:35.773205] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, 
column_name) [2024-09-13 13:02:35.773215] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:35.773220] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:35.773226] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:35.773245] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:35.773251] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:35.773260] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:35.773271] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:35.773277] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=4][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:35.773284] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:35.773294] WDIAG [SQL] stmt_query 
(ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:35.773300] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:35.773306] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=53, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:35.773325] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] will sleep(sleep_us=53000, remain_us=500347, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.807024] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=36][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:35.826569] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.827029] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.827097] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=66][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.827110] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.827128] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.827153] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755827151, replica_locations:[]}) [2024-09-13 13:02:35.827192] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=36] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.827223] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:53, local_retry_times:53, err_:-4721, 
err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:35.827250] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.827263] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.827279] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:35.827290] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:35.827298] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:35.827332] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:35.827348] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.827421] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563704531, 
cache_obj->added_lc()=false, cache_obj->get_object_id()=748, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.828701] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=42][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.828749] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=46][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.828911] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.829131] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.829149] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.829160] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] leader doesn't exist, try 
use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.829172] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.829185] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755829184, replica_locations:[]}) [2024-09-13 13:02:35.829202] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.829214] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.829224] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.829238] WDIAG [SQL.DAS] 
block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:35.829248] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:35.829258] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:35.829275] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:35.829289] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:35.829306] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:35.829317] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:35.829326] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:35.829334] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:35.829346] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:35.829358] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:35.829367] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:35.829377] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:35.829386] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:35.829394] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:35.829403] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:35.829423] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:35.829434] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:35.829458] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:35.829468] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:35.829478] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:35.829488] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=54, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:35.829516] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17] will sleep(sleep_us=54000, remain_us=444156, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.844265] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.844290] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=24][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:35.844326] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:35.844334] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:35.844363] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=6] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:35.860500] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD1-0-0] [lt=22][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755860028) [2024-09-13 13:02:35.860542] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) 
[20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD1-0-0] [lt=33][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203755860028}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:35.860576] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.860589] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.860597] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203755860560) [2024-09-13 13:02:35.868063] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B56-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:35.868093] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) 
[20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B56-0-0] [lt=29][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203755867560], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:35.868531] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE6-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.869202] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE6-0-0] [lt=19][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203755868904, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035777, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203755868670}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:35.869233] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE6-0-0] [lt=30][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:35.872729] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=10] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.873205] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=17] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.873258] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) 
[20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=12] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:35.883788] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.884158] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.884194] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=35][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.884207] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.884222] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.884243] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755884241, replica_locations:[]}) [2024-09-13 13:02:35.884297] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=51] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.884327] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:54, local_retry_times:54, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:35.884353] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.884365] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.884381] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:35.884391] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:35.884400] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:35.884423] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table 
WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:35.884448] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=23][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.884528] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563761635, cache_obj->added_lc()=false, cache_obj->get_object_id()=749, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.885916] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.885958] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=41][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.886086] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.886303] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.886334] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.886345] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.886356] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.886372] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755886370, replica_locations:[]}) [2024-09-13 13:02:35.886387] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.886400] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] renew location failed(ret=-4721, 
ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.886411] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.886425] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:35.886434] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:35.886456] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:35.886474] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:35.886489] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4721] Failed to calculate 
table location(ret=-4721) [2024-09-13 13:02:35.886507] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:35.886518] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:35.886527] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:35.886536] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:35.886548] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:35.886560] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:35.886569] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:35.886578] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] fail to 
generate raw plan(ret=-4721) [2024-09-13 13:02:35.886586] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:35.886595] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:35.886605] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:35.886627] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:35.886639] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:35.886650] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:35.886659] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:35.886668] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:35.886677] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] execute 
failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=55, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:35.886712] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20] will sleep(sleep_us=55000, remain_us=386961, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.895730] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=28] Cache replace map node details(ret=0, replace_node_count=0, replace_time=2669, replace_start_pos=566226, replace_num=62914) [2024-09-13 13:02:35.895752] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10) [2024-09-13 13:02:35.905019] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=70] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:35.911787] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=75] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=6669, clean_start_pos=1132461, clean_num=125829) [2024-09-13 13:02:35.941947] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.942330] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.942357] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.942369] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.942418] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=47] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.942447] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755942446, replica_locations:[]}) [2024-09-13 13:02:35.942471] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:35.942501] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=23][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:55, local_retry_times:55, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:35.942563] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=56][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:35.942580] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:35.942595] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:35.942604] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:35.942612] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:35.942640] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:35.942655] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:35.942734] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563819842, cache_obj->added_lc()=false, cache_obj->get_object_id()=750, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:35.944051] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.944093] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=41][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.944220] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:35.944495] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:35.944514] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:35.944524] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:35.944536] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:35.944562] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203755944561, replica_locations:[]}) [2024-09-13 13:02:35.944598] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=33][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.944612] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:35.944623] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, 
is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:35.944637] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:35.944668] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=30][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:35.944683] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:35.944704] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:35.944718] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:35.944741] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:35.944759] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] 
[lt=16][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:35.944769] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:35.944778] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:35.944798] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:35.944810] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:35.944819] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:35.944827] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:35.944835] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:35.944844] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] Failed to 
optimize logical plan(ret=-4721) [2024-09-13 13:02:35.944852] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:35.944870] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=7][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:35.944907] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=34][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:35.944920] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:35.944930] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:35.944940] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:35.944949] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=56, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:35.944970] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] will sleep(sleep_us=56000, remain_us=328702, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:35.960554] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD2-0-0] [lt=39][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755960132) [2024-09-13 13:02:35.960584] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD2-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203755960132}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:35.960605] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:35.960619] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb 
(ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:35.960634] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203755960599) [2024-09-13 13:02:35.960646] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203755760572, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:35.960667] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.960677] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:35.960682] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, 
server_version_epoch_tstamp_=1726203755960657) [2024-09-13 13:02:36.001217] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.001551] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.001579] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.001609] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=29] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.001646] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=35] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.001665] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756001664, replica_locations:[]}) [2024-09-13 13:02:36.001683] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.001703] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:56, local_retry_times:56, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:36.001722] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.001733] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.001745] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:36.001753] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:36.001761] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:36.001791] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:36.001812] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.001861] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563878978, cache_obj->added_lc()=false, cache_obj->get_object_id()=751, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.003052] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.003294] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.003316] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.003328] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.003339] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] 
[lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.003352] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756003351, replica_locations:[]}) [2024-09-13 13:02:36.003420] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=270253, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:36.047072] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=28][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:36.060674] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:36.060726] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=34][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756060664) [2024-09-13 
13:02:36.060742] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203755960655, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:36.060739] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD3-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756060207) [2024-09-13 13:02:36.060773] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.060757] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD3-0-0] [lt=17][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203756060207}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, 
min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:36.060785] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.060793] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756060759) [2024-09-13 13:02:36.060806] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.060815] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.060811] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.060824] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756060803) [2024-09-13 13:02:36.061057] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) 
[2024-09-13 13:02:36.061085] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.061097] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.061110] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.061125] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756061124, replica_locations:[]}) [2024-09-13 13:02:36.061171] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=43] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.061202] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.061213] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] 
[lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.061247] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.061296] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563938412, cache_obj->added_lc()=false, cache_obj->get_object_id()=752, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.062490] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.062780] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.062805] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.062815] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, 
replicas:[]}) [2024-09-13 13:02:36.062828] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.062842] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756062841, replica_locations:[]}) [2024-09-13 13:02:36.062915] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=210757, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:36.093053] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=35] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:36.093805] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=12] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:36.093809] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=19] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:36.094240] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=11] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:36.094710] INFO [RPC.FRAME] batch_rpc_easy_timer_cb 
(ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:36.094893] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=17] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:36.095335] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=23] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:36.095565] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=11] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:36.095835] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=7] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9) [2024-09-13 13:02:36.095940] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=17] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:36.112250] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=56] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14018152858, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:36.112803] INFO [COMMON] wash (ob_kvcache_store.cpp:342) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=37] Wash time detail, (compute_wash_size_time=189, refresh_score_time=64, wash_time=0) [2024-09-13 13:02:36.113937] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) 
[20292][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=39][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:36.119672] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=36] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:36.121202] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.121596] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.121625] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.121668] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=42] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.121697] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.121715] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756121713, replica_locations:[]}) [2024-09-13 13:02:36.121732] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.121756] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.121767] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.121805] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.121861] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6563998978, cache_obj->added_lc()=false, cache_obj->get_object_id()=753, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.123076] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:36.123273] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.123295] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.123306] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.123317] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.123330] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756123329, replica_locations:[]}) [2024-09-13 13:02:36.123409] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=150264, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:36.126425] WDIAG [SHARE] refresh (ob_alive_server_tracer.cpp:138) 
[19881][ServerTracerTim][T0][YB42AC103323-000621F921860C80-0-0] [lt=5][errcode=-4002] invalid argument, empty server list(ret=-4002) [2024-09-13 13:02:36.126453] WDIAG [SHARE] refresh (ob_alive_server_tracer.cpp:380) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C80-0-0] [lt=27][errcode=-4002] refresh sever list failed(ret=-4002) [2024-09-13 13:02:36.126458] WDIAG [SHARE] runTimerTask (ob_alive_server_tracer.cpp:255) [19881][ServerTracerTim][T0][YB42AC103323-000621F921860C80-0-0] [lt=5][errcode=-4002] refresh alive server list failed(ret=-4002) [2024-09-13 13:02:36.138824] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=4] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, table_name.ptr()="data_size:12, data:5F5F616C6C5F736572766572", ret=-5019) [2024-09-13 13:02:36.138852] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=25][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-09-13 13:02:36.138864] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=11][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_server, db_name=oceanbase) [2024-09-13 13:02:36.138887] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=21][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-09-13 13:02:36.138900] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=10][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:36.138909] 
WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=9][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:36.138921] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=8][errcode=-5019] Table 'oceanbase.__all_server' doesn't exist [2024-09-13 13:02:36.138931] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=9][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:36.138939] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=7][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:36.138944] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:36.138949] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:36.138955] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=5][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:36.138961] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=5][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:36.138971] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=9][errcode=-5019] execute 
stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:36.138987] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=8][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:36.138996] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=8][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.139007] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=9][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.139018] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=10][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:36.139027] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=8][errcode=-5019] fail to handle text query(stmt=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server, ret=-5019) [2024-09-13 13:02:36.139037] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=9][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:36.139048] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=10][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:36.139064] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=13][errcode=-5019] [RETRY] check if need 
retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:36.139083] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=15][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:36.139093] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=9][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:36.139098] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:36.139111] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:36.139124] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C81-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.139136] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19878][ServerGTimer][T0][YB42AC103323-000621F921960C81-0-0] [lt=10][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server"}, aret=-5019, ret=-5019) [2024-09-13 13:02:36.139146] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server) [2024-09-13 13:02:36.139159] WDIAG 
[SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:36.139167] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:36.139178] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203756138663, sql=SELECT *, time_to_usec(gmt_modified) AS last_hb_time FROM __all_server) [2024-09-13 13:02:36.139189] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:36.139214] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC85-0-0] [lt=22][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:36.139280] WDIAG [SHARE] refresh (ob_all_server_tracer.cpp:568) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] fail to get servers_info(ret=-5019, ret="OB_TABLE_NOT_EXIST", GCTX.sql_proxy_=0x55a386ae7408) [2024-09-13 13:02:36.139287] WDIAG [SHARE] runTimerTask (ob_all_server_tracer.cpp:626) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] fail to refresh all server map(ret=-5019) [2024-09-13 13:02:36.160887] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) 
[2024-09-13 13:02:36.160912] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756160871) [2024-09-13 13:02:36.160922] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203756060756, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:36.160942] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.160950] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.160955] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756160928) [2024-09-13 13:02:36.178665] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=26] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, 
log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1}) [2024-09-13 13:02:36.179680] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D9A3A431-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:36.182666] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.183039] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.183067] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.183079] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.183092] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.183109] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756183108, replica_locations:[]}) [2024-09-13 13:02:36.183156] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=44] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.183181] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.183192] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.183216] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.183261] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564060378, cache_obj->added_lc()=false, cache_obj->get_object_id()=754, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 
0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.184528] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.184840] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.184900] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=58][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.184919] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.184935] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.184950] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756184949, replica_locations:[]}) [2024-09-13 13:02:36.185002] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=60000, remain_us=88670, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:36.197573] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=27] PNIO [ratelimit] time: 1726203756197572, bytes: 4442008, bw: 0.050895 MB/s, add_ts: 1007615, add_bytes: 53774 [2024-09-13 13:02:36.203568] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.204210] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.205315] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.206175] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:565) [20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=35] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, 
free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0) [2024-09-13 13:02:36.206946] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.208125] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.210811] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.210904] INFO [MDS] for_each_ls_in_tenant (mds_tenant_service.cpp:237) [20107][T1_Occam][T1][YB42AC103323-000621F921A60C8A-0-0] [lt=5] for each ls(succ_num=0, ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.211846] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.215461] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.216241] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782EA-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.216788] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:36.218216] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=31] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:36.221376] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.222321] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.227749] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.228219] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=21] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:36.228340] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=16] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:36.228708] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.229574] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=20] gc stale ls task succ [2024-09-13 13:02:36.229973] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:346) 
[20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=12] ====== check clog disk timer task ====== [2024-09-13 13:02:36.229990] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=15] get_disk_usage(ret=0, capacity(MB):=0, used(MB):=0) [2024-09-13 13:02:36.230002] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=7] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false) [2024-09-13 13:02:36.234784] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=23] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:36.235196] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.236124] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.238342] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=19] table not exist(tenant_id=1, database_id=201001, table_name=__all_disk_io_calibration, table_name.ptr()="data_size:25, data:5F5F616C6C5F6469736B5F696F5F63616C6962726174696F6E", ret=-5019) [2024-09-13 13:02:36.238365] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=21][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_disk_io_calibration, ret=-5019) [2024-09-13 13:02:36.238372] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) 
[19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_disk_io_calibration, db_name=oceanbase) [2024-09-13 13:02:36.238383] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=10][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_disk_io_calibration) [2024-09-13 13:02:36.238391] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=6][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:36.238396] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:36.238402] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=3][errcode=-5019] Table 'oceanbase.__all_disk_io_calibration' doesn't exist [2024-09-13 13:02:36.238407] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=4][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:36.238411] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=3][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:36.238415] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=3][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:36.238420] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) 
[19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:36.238427] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=7][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:36.238432] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:36.238454] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=21][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:36.238469] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=8][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:36.238476] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.238486] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=8][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.238493] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:36.238502] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=7][errcode=-5019] fail to handle text query(stmt=select mode, size, latency, iops from __all_disk_io_calibration where 
svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA", ret=-5019) [2024-09-13 13:02:36.238510] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:36.238522] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=11][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA""}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:36.238536] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=11][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:36.238551] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:36.238556] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=5][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:36.238561] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=5][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:36.238575] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select mode, size, latency, iops from 
__all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA""}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:36.238588] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19878][ServerGTimer][T1][YB42AC103323-000621F921960C82-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.238600] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19878][ServerGTimer][T0][YB42AC103323-000621F921960C82-0-0] [lt=11][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA""}, aret=-5019, ret=-5019) [2024-09-13 13:02:36.238609] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA") [2024-09-13 13:02:36.238614] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:36.238618] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:36.238626] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e06e0, start=1726203756238164, sql=select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA") [2024-09-13 13:02:36.238636] 
WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:36.238641] WDIAG [COMMON] parse_calibration_table (ob_io_calibration.cpp:829) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=3][errcode=-5019] query failed(ret=-5019, sql_string=select mode, size, latency, iops from __all_disk_io_calibration where svr_ip = "172.16.51.35" and svr_port = 2882 and storage_name = "DATA") [2024-09-13 13:02:36.238694] WDIAG [COMMON] read_from_table (ob_io_calibration.cpp:699) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] parse calibration data failed(ret=-5019) [2024-09-13 13:02:36.238703] WDIAG [SERVER] refresh_io_calibration (ob_server.cpp:3477) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] fail to refresh io calibration from table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:36.238710] WDIAG [SERVER] runTimerTask (ob_server.cpp:3467) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] ObRefreshIOCalibrationTimeTask task failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:36.239211] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:36.239224] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:36.239230] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:36.239239] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit 
failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:36.242529] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=0] server is initiating(server_id=0, local_seq=55, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:36.243529] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=15] table not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, table_name.ptr()="data_size:16, data:5F5F616C6C5F6D657267655F696E666F", ret=-5019) [2024-09-13 13:02:36.243550] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=20][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_merge_info, ret=-5019) [2024-09-13 13:02:36.243557] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_merge_info, db_name=oceanbase) [2024-09-13 13:02:36.243564] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_merge_info) [2024-09-13 13:02:36.243570] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=4][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:36.243574] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:36.243579] WDIAG 
resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=3][errcode=-5019] Table 'oceanbase.__all_merge_info' doesn't exist [2024-09-13 13:02:36.243583] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=3][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:36.243609] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=25][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:36.243613] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=3][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:36.243618] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:36.243625] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=7][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:36.243617] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.243629] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:36.243636] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=6][errcode=-5019] 
execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:36.243648] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=7][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:36.243655] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.243661] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=4][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.243668] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:36.243673] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_merge_info WHERE tenant_id = '1', ret=-5019) [2024-09-13 13:02:36.243677] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=3][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:36.243690] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=12][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:36.243702] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=9][errcode=-5019] [RETRY] check if need 
retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:36.243715] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:36.243719] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=4][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:36.243723] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:36.243733] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:36.243741] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.243746] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C83-0-0] [lt=4][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_merge_info WHERE tenant_id = '1'"}, aret=-5019, ret=-5019) [2024-09-13 13:02:36.243754] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1') [2024-09-13 13:02:36.243759] WDIAG [SERVER] 
retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:36.243764] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:36.243776] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203756243371, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1') [2024-09-13 13:02:36.243782] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:36.243787] WDIAG [SHARE] load_global_merge_info (ob_global_merge_table_operator.cpp:49) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, meta_tenant_id=1, sql=SELECT * FROM __all_merge_info WHERE tenant_id = '1') [2024-09-13 13:02:36.243836] WDIAG [STORAGE] refresh_merge_info (ob_tenant_freeze_info_mgr.cpp:890) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] failed to load global merge info(ret=-5019, ret="OB_TABLE_NOT_EXIST", global_merge_info={tenant_id:1, cluster:{name:"cluster", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, frozen_scn:{name:"frozen_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, global_broadcast_scn:{name:"global_broadcast_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, last_merged_scn:{name:"last_merged_scn", is_scn:true, scn:{val:1, v:0}, value:-1, need_update:false}, is_merge_error:{name:"is_merge_error", is_scn:false, 
scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, merge_status:{name:"merge_status", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, error_type:{name:"error_type", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, suspend_merging:{name:"suspend_merging", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, merge_start_time:{name:"merge_start_time", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}, last_merged_time:{name:"last_merged_time", is_scn:false, scn:{val:18446744073709551615, v:3}, value:0, need_update:false}}) [2024-09-13 13:02:36.243865] WDIAG [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:1005) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=29][errcode=-5019] fail to refresh merge info(tmp_ret=-5019, tmp_ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:36.243889] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=0] server is initiating(server_id=0, local_seq=56, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:36.244682] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.245258] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.245712] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:36.245731] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=35][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.245764] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.245775] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.245801] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.245826] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756245824, replica_locations:[]}) [2024-09-13 13:02:36.245834] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.245850] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 
13:02:36.245882] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.245893] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.245918] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.245973] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564123090, cache_obj->added_lc()=false, cache_obj->get_object_id()=755, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.246024] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.246043] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.246059] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] leader doesn't exist, 
try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.246069] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.246081] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756246080, replica_locations:[]}) [2024-09-13 13:02:36.246127] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1997743, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.246221] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.246361] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.246371] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.246375] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.246385] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.246394] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756246394, replica_locations:[]}) [2024-09-13 13:02:36.246405] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.246426] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.246432] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.246472] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 
13:02:36.246504] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564123623, cache_obj->added_lc()=false, cache_obj->get_object_id()=756, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.247126] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.247461] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.247960] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.248016] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=36][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.248027] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.248039] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=22] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.248049] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.248059] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.248063] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.248073] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.248088] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756248088, replica_locations:[]}) [2024-09-13 13:02:36.248101] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.248086] INFO 
[SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756248085, replica_locations:[]}) [2024-09-13 13:02:36.248145] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0] will sleep(sleep_us=25527, remain_us=25527, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203756273672) [2024-09-13 13:02:36.248166] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=1000, remain_us=1995705, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.248480] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.249186] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.249390] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.249551] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.249592] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.249609] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.249616] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.249623] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.249638] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756249637, replica_locations:[]}) [2024-09-13 13:02:36.249652] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.249671] WDIAG [SQL] do_close_plan 
(ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.249679] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.249707] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.249740] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564126859, cache_obj->added_lc()=false, cache_obj->get_object_id()=758, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.249832] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.250659] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.250886] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.250902] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.250913] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.250920] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.250929] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756250928, replica_locations:[]}) [2024-09-13 13:02:36.250965] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=2000, remain_us=1992906, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.253132] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.253164] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=35][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:36.253379] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.253391] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.253410] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.253417] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.253429] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756253428, replica_locations:[]}) [2024-09-13 13:02:36.253448] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.253463] WDIAG [SQL] 
do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.253470] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.253492] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.253522] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564130643, cache_obj->added_lc()=false, cache_obj->get_object_id()=759, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.254180] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.254333] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.254500] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.254517] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.254536] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.254546] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.254557] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756254557, replica_locations:[]}) [2024-09-13 13:02:36.254594] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1989276, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.257780] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.257837] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, 
table_name=__all_virtual_ls_meta_table, table_name.ptr()="data_size:27, data:5F5F616C6C5F7669727475616C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:36.257855] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=17][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, ret=-5019) [2024-09-13 13:02:36.257863] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_virtual_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:36.257871] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=7][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_virtual_ls_meta_table) [2024-09-13 13:02:36.257889] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=16][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:36.257893] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:36.257901] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=5][errcode=-5019] Table 'oceanbase.__all_virtual_ls_meta_table' doesn't exist [2024-09-13 13:02:36.257909] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=7][errcode=-5019] resolve base or alias table factor 
failed(ret=-5019) [2024-09-13 13:02:36.257913] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:36.257917] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:36.257923] WDIAG [SQL.RESV] resolve_joined_table_item (ob_dml_resolver.cpp:3379) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=5][errcode=-5019] resolve table failed(ret=-5019) [2024-09-13 13:02:36.257927] WDIAG [SQL.RESV] resolve_joined_table (ob_dml_resolver.cpp:2934) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=4][errcode=-5019] resolve joined table item failed(ret=-5019) [2024-09-13 13:02:36.257934] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2788) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=6][errcode=-5019] resolve joined table failed(ret=-5019) [2024-09-13 13:02:36.257940] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=5][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:36.257949] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=8][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:36.257956] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=6][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:36.257963] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] 
[lt=6][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:36.257972] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:36.257977] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=4][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.257984] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=5][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.257991] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=7][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:36.257996] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;, ret=-5019) [2024-09-13 13:02:36.258001] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=4][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:36.258005] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, 
sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:36.258011] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.258015] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=7][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:36.258025] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=8][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:36.258025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.258031] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.258038] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) 
[20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:36.258041] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.258042] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:36.258051] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=4][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:36.258050] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756258049, replica_locations:[]}) [2024-09-13 13:02:36.258060] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20295][BlackListServic][T1][YB42AC103323-000621F921260C83-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.258066] WDIAG [SERVER] query 
(ob_inner_sql_connection.cpp:993) [20295][BlackListServic][T0][YB42AC103323-000621F921260C83-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, aret=-5019, ret=-5019) [2024-09-13 13:02:36.258060] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.258075] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:36.258080] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:36.258084] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 
13:02:36.258087] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:36.258089] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.258092] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203756257619, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:36.258099] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:111) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:36.258101] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.258107] WDIAG [STORAGE.TRANS] do_thread_task_ (ob_black_list.cpp:222) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = 
b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;) [2024-09-13 13:02:36.258117] INFO [STORAGE.TRANS] print_stat_ (ob_black_list.cpp:398) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=8] start to print blacklist info [2024-09-13 13:02:36.258137] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564135257, cache_obj->added_lc()=false, cache_obj->get_object_id()=760, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.258173] INFO [STORAGE.TRANS] run1 (ob_black_list.cpp:194) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=9] ls blacklist refresh finish(cost_time=1349) [2024-09-13 13:02:36.258957] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.259195] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.259211] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.259217] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.259227] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.259236] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756259235, replica_locations:[]}) [2024-09-13 13:02:36.259276] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1984595, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.260834] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD4-0-0] [lt=27][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756260357) [2024-09-13 13:02:36.260863] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD4-0-0] [lt=23][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, 
total_part_count:0, generate_timestamp:1726203756260357}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:36.260899] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.260914] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.260927] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756260886) [2024-09-13 13:02:36.263492] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.263624] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.263758] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.263777] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.263784] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.263792] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.263801] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756263801, replica_locations:[]}) [2024-09-13 13:02:36.263814] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.263833] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.263841] 
WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.263867] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.263905] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564141025, cache_obj->added_lc()=false, cache_obj->get_object_id()=761, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.264673] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=11][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:36.264686] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.264754] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.264962] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.264978] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.264984] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.264990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.264998] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756264997, replica_locations:[]}) [2024-09-13 13:02:36.265032] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1978838, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.265838] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=42] PNIO [ratelimit] time: 1726203756265836, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007623, add_bytes: 0 [2024-09-13 13:02:36.266249] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C8E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:36.266482] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.266499] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.266506] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.266516] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.266552] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=8][errcode=0] server is initiating(server_id=0, local_seq=57, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:36.267431] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:36.267465] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=32][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:36.267472] WDIAG 
[SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:36.267481] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:36.267487] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:36.267491] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:36.267497] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:36.267501] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=3][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:36.267505] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:36.267509] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=3][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:36.267513] WDIAG [SQL.RESV] resolve_from_clause 
(ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:36.267516] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=3][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:36.267520] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:36.267524] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:36.267531] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:36.267539] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=7][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.267543] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=3][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.267550] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=7][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:36.267554] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 
ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:36.267561] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=6][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:36.267568] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=6][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:36.267580] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=9][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:36.267589] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=7][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:36.267596] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=6][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:36.267599] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:36.267617] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) 
[2024-09-13 13:02:36.267625] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.267669] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=42][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:36.267677] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:36.267683] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=5][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:36.267691] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=7][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:36.267696] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=5][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203756267327, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:36.267706] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=9][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:36.267711] WDIAG [SHARE.PT] get_by_tenant 
(ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=4][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:36.267760] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=8][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:36.267768] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=7][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:36.267773] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=5][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:36.267778] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=4][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:36.267783] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=3][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:36.267790] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=6][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:36.267795] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8E-0-0] [lt=5][errcode=-5019] fail to check ls meta table(ret=-5019, 
ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:36.270216] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.270482] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.270499] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.270505] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.270512] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.270535] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756270535, replica_locations:[]}) [2024-09-13 13:02:36.270548] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.270567] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.270575] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.270591] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.270620] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564147741, cache_obj->added_lc()=false, cache_obj->get_object_id()=762, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.271412] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.271636] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 
13:02:36.271652] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.271658] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.271668] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.271689] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756271688, replica_locations:[]}) [2024-09-13 13:02:36.271727] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1972144, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.273775] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203756273672, ctx_timeout_ts=1726203756273672, worker_timeout_ts=1726203756273672, default_timeout=1000000) [2024-09-13 13:02:36.273801] WDIAG 
[SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=25][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:36.273811] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}])
[2024-09-13 13:02:36.273841] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=29] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.273860] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}])
[2024-09-13 13:02:36.273901] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.273914] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.273933] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.273977] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564151092, cache_obj->added_lc()=false, cache_obj->get_object_id()=757, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.274719] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203756273672, ctx_timeout_ts=1726203756273672, worker_timeout_ts=1726203756273672, default_timeout=1000000)
[2024-09-13 13:02:36.274745] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=26][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:36.274756] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}])
[2024-09-13 13:02:36.274768] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:36.274778] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.274812] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=33][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012)
[2024-09-13 13:02:36.274842] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false)
[2024-09-13 13:02:36.274856] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.274865] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.274923] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=40] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721)
[2024-09-13 13:02:36.274968] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=0][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012)
[2024-09-13 13:02:36.274999] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=16][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012)
[2024-09-13 13:02:36.275012] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.275023] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=8] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000559)
[2024-09-13 13:02:36.275033] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=10][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012)
[2024-09-13 13:02:36.275044] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=9][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:36.275058] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1)
[2024-09-13 13:02:36.275070] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:36.275084] WDIAG [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=13][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1)
[2024-09-13 13:02:36.275097] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.275102] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:36.275174] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19944][SerScheQueue0][T1][YB42AC103323-000621F921460C84-0-0] [lt=44][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564152293, cache_obj->added_lc()=false, cache_obj->get_object_id()=764, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.275236] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=22][errcode=-4012] load failed(ret=-4012, for_update=false)
[2024-09-13 13:02:36.275248] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:36.275256] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4012] get failed(ret=-4012)
[2024-09-13 13:02:36.275266] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1})
[2024-09-13 13:02:36.275306] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4601) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=38][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1})
[2024-09-13 13:02:36.275323] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=15][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1)
[2024-09-13 13:02:36.275338] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=14] [REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, cost=2001668)
[2024-09-13 13:02:36.275353] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=14][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1)
[2024-09-13 13:02:36.275366] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=12] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2001706)
[2024-09-13 13:02:36.275378] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=11][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1])
[2024-09-13 13:02:36.275387] INFO [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=9] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:36.275395] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) [19944][SerScheQueue0][T0][YB42AC103323-000621F921460C84-0-0] [lt=8][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT")
[2024-09-13 13:02:36.275405] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] fail to batch process task(ret=-4012)
[2024-09-13 13:02:36.275414] WDIAG [SERVER] run1 (ob_uniq_task_queue.h:456) [19944][SerScheQueue0][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1)
[2024-09-13 13:02:36.275445] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=6] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1])
[2024-09-13 13:02:36.275462] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=15] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1)
[2024-09-13 13:02:36.276139] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.277042] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:36.277174] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.277391] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.277409] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.277416] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.277430] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.277456] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756277455, replica_locations:[]})
[2024-09-13 13:02:36.277510] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1997959, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.277593] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.277795] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.277812] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.277820] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.277833] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.277848] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756277847, replica_locations:[]})
[2024-09-13 13:02:36.277862] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.277890] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.277893] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.277899] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.277927] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.277967] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564155083, cache_obj->added_lc()=false, cache_obj->get_object_id()=765, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.278045] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.278061] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.278068] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.278078] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.278088] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756278087, replica_locations:[]})
[2024-09-13 13:02:36.278100] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.278117] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.278122] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.278146] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.278170] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564155291, cache_obj->added_lc()=false, cache_obj->get_object_id()=763, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.278895] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.278980] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.279129] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.279150] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.279161] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.279174] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.279185] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.279189] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756279189, replica_locations:[]})
[2024-09-13 13:02:36.279206] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.279218] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.279225] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.279230] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=1000, remain_us=1996239, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.279233] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756279232, replica_locations:[]})
[2024-09-13 13:02:36.279266] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=7000, remain_us=1964604, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.279667] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.279917] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.279931] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.279937] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.279944] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.279951] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.279957] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:36.279964] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:36.279969] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638)
[2024-09-13 13:02:36.280057] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.280234] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.280246] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.280251] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.280257] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.280264] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756280264, replica_locations:[]})
[2024-09-13 13:02:36.280274] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:36.280283] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0][errcode=-4638] fail to refresh core partition(tmp_ret=-4721)
[2024-09-13 13:02:36.280406] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.280575] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.280589] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.280599] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.280609] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.280621] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000)
[2024-09-13 13:02:36.280624] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756280623, replica_locations:[]})
[2024-09-13 13:02:36.280634] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4638]
[2024-09-13 13:02:36.280639] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.280658] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.280665] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.280682] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.280714] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564157833, cache_obj->added_lc()=false, cache_obj->get_object_id()=766, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.280747] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.280954] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.280970] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.281009] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.281024] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.281033] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.281042] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:36.281051] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:36.281057] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0)
[2024-09-13 13:02:36.281146] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.281298] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.281333] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.281344] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.281354] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.281361] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.281370] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:36.281378] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:36.281385] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1)
[2024-09-13 13:02:36.281472] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.281595] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.281652] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.281667] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.281676] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.281686] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.281694] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader
finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.281703] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:36.281711] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:36.281717] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2) [2024-09-13 13:02:36.281724] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638) [2024-09-13 13:02:36.281732] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:36.281738] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2) [2024-09-13 13:02:36.281756] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.281777] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.281790] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.281801] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.281812] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756281812, replica_locations:[]}) [2024-09-13 13:02:36.281859] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1993611, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.284054] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.284273] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.284285] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.284291] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.284297] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.284305] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756284304, replica_locations:[]}) [2024-09-13 13:02:36.284316] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.284332] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.284340] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, 
do_close_plan_ret=-4006) [2024-09-13 13:02:36.284358] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.284383] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564161504, cache_obj->added_lc()=false, cache_obj->get_object_id()=768, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.285062] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.285260] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.285275] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.285281] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.285291] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.285299] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756285298, replica_locations:[]}) [2024-09-13 13:02:36.285336] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1990133, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.286455] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.286615] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.286628] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.286639] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.286645] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.286652] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756286652, replica_locations:[]}) [2024-09-13 13:02:36.286664] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.286680] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.286688] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.286707] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.286736] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] 
[lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564163856, cache_obj->added_lc()=false, cache_obj->get_object_id()=767, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.287559] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.287713] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.287911] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.287932] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.287938] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.287945] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.287952] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756287952, replica_locations:[]}) [2024-09-13 13:02:36.287983] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1955887, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.288508] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.288535] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.288730] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.288742] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.288748] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.288754] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.288761] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756288760, replica_locations:[]}) [2024-09-13 13:02:36.288773] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.288785] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.288791] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.288803] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.288829] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6564165950, cache_obj->added_lc()=false, cache_obj->get_object_id()=769, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.289476] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.289654] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.289674] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.289685] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.289696] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.289705] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756289704, replica_locations:[]}) [2024-09-13 13:02:36.289739] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1985731, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.293923] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.294129] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.294141] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.294147] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.294153] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.294162] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756294161, replica_locations:[]}) [2024-09-13 13:02:36.294174] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.294186] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.294191] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.294209] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.294234] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564171355, cache_obj->added_lc()=false, cache_obj->get_object_id()=771, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:36.294888] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.295108] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.295123] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.295129] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.295136] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.295144] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756295143, replica_locations:[]}) [2024-09-13 13:02:36.295177] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1980293, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.295918] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=14] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8)
[2024-09-13 13:02:36.296148] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.296323] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.296338] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.296343] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.296352] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.296361] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756296360, replica_locations:[]})
[2024-09-13 13:02:36.296378] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.296391] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.296399] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.296410] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.296452] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564173556, cache_obj->added_lc()=false, cache_obj->get_object_id()=770, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.297074] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.297244] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.297261] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.297267] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.297276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.297286] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756297286, replica_locations:[]})
[2024-09-13 13:02:36.297325] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1946546, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.300359] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.300688] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.300707] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.300713] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.300720] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.300733] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756300733, replica_locations:[]})
[2024-09-13 13:02:36.300743] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.300758] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.300766] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.300780] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.300806] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564177928, cache_obj->added_lc()=false, cache_obj->get_object_id()=772, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.300979] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.301504] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.301732] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.301749] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.301756] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.301762] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.301771] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756301770, replica_locations:[]})
[2024-09-13 13:02:36.301807] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=6000, remain_us=1973663, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.301997] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.306522] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.306780] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.306808] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.306819] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.306834] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.306847] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756306846, replica_locations:[]})
[2024-09-13 13:02:36.306866] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.306898] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.306904] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.306923] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.306954] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564184074, cache_obj->added_lc()=false, cache_obj->get_object_id()=773, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.307750] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.308010] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.308013] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.308025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.308031] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.308038] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.308045] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756308044, replica_locations:[]})
[2024-09-13 13:02:36.308083] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=10000, remain_us=1935788, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.308165] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.308180] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.308186] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.308193] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.308201] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756308201, replica_locations:[]})
[2024-09-13 13:02:36.308213] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.308229] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.308234] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.308257] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.308284] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564185405, cache_obj->added_lc()=false, cache_obj->get_object_id()=774, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.308913] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.309131] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.309149] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.309155] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.309162] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.309169] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756309169, replica_locations:[]})
[2024-09-13 13:02:36.309203] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1966266, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.313158] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:36.315455] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.316362] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.316471] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.316623] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.316645] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.316656] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.316672] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.316683] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756316682, replica_locations:[]})
[2024-09-13 13:02:36.316697] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.316717] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.316725] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.316740] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.316771] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564193892, cache_obj->added_lc()=false, cache_obj->get_object_id()=776, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.317506] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.317846] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.317865] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.317872] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.317888] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.317896] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756317895, replica_locations:[]})
[2024-09-13 13:02:36.317932] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1957538, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.318251] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.318517] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.318544] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.318558] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.318580] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.318596] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756318596, replica_locations:[]})
[2024-09-13 13:02:36.318609] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.318628] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.318636] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.318649] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.318682] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564195801, cache_obj->added_lc()=false, cache_obj->get_object_id()=775, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.319532] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.319800] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.319823] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.319829] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.319845] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.319903] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756319902, replica_locations:[]})
[2024-09-13 13:02:36.319943] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=11000, remain_us=1923928, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.321673] INFO [SHARE] blacklist_loop_ (ob_server_blacklist.cpp:313) [20019][Blacklist][T0][Y0-0000000000000000-0-0] [lt=18] blacklist_loop exec finished(cost_time=17, is_enabled=true, send_cnt=0)
[2024-09-13 13:02:36.326105] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.326342] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.326360] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.326366] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.326373] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.326381] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756326381, replica_locations:[]})
[2024-09-13 13:02:36.326394] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.326411] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.326419] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.326434] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.326474] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564203595, cache_obj->added_lc()=false, cache_obj->get_object_id()=777, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.327223] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.327402] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.327417] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.327426] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.327456] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.327467] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756327466, replica_locations:[]})
[2024-09-13 13:02:36.327502] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1947968, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.330951] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.331115] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.331309] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.331327] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.331334] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1,
ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.331342] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.331350] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756331350, replica_locations:[]}) [2024-09-13 13:02:36.331363] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.331387] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.331396] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.331413] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.331461] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564208574, 
cache_obj->added_lc()=false, cache_obj->get_object_id()=778, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.331942] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.332331] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.332518] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.332542] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.332551] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.332562] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.332571] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756332570, replica_locations:[]}) [2024-09-13 13:02:36.332608] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1911263, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.336689] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.336887] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.336907] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.336914] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.336924] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.336935] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756336934, replica_locations:[]}) [2024-09-13 13:02:36.336948] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.336963] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.336971] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.336994] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.337027] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564214148, cache_obj->added_lc()=false, cache_obj->get_object_id()=779, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 
0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.337749] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.337987] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.338006] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.338013] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.338023] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.338033] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756338032, replica_locations:[]}) [2024-09-13 13:02:36.338068] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=10000, remain_us=1937402, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.344795] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.345064] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.345096] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.345102] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.345110] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.345122] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203756345122, replica_locations:[]}) [2024-09-13 13:02:36.345135] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.345152] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.345159] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.345175] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.345204] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564222325, cache_obj->added_lc()=false, cache_obj->get_object_id()=780, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.345229] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:36.345270] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] 
[lt=1] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:36.345289] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=18] refresh gts(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1, need_refresh=false, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:36.345286] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CD9-0-0] [lt=28][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203756345249}) [2024-09-13 13:02:36.345304] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=1] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:36.345914] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.346117] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.346135] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.346142] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.346149] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.346156] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756346156, replica_locations:[]}) [2024-09-13 13:02:36.346190] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=13000, remain_us=1897680, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.347347] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.348233] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.348416] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.348520] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.348538] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.348544] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.348554] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.348562] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756348562, replica_locations:[]}) [2024-09-13 13:02:36.348575] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.348593] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.348601] WDIAG [SQL] do_close 
(ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.348617] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.348654] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564225773, cache_obj->added_lc()=false, cache_obj->get_object_id()=781, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.349395] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.349491] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=20] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:36.349606] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.349624] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:36.349630] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.349639] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.349650] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756349649, replica_locations:[]}) [2024-09-13 13:02:36.349687] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1925782, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.359394] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.359909] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.359932] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.359939] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.359950] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.359972] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756359972, replica_locations:[]}) [2024-09-13 13:02:36.359986] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.360006] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.360014] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) 
[2024-09-13 13:02:36.360035] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.360071] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564237190, cache_obj->added_lc()=false, cache_obj->get_object_id()=782, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.360923] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.360937] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:36.360949] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.360957] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, 
total_part_count=0, generate_timestamp=1726203756360929) [2024-09-13 13:02:36.360967] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203756160928, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:36.360988] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.360999] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.361007] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756360975) [2024-09-13 13:02:36.361373] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.361392] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.361403] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.361403] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.361414] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.361420] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.361427] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.361434] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.361457] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203756361457, replica_locations:[]}) [2024-09-13 13:02:36.361452] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=33] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756361451, replica_locations:[]}) [2024-09-13 13:02:36.361475] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.361497] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.361506] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.361510] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1882360, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.361545] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.361588] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564238707, cache_obj->added_lc()=false, cache_obj->get_object_id()=783, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.362412] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.362594] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.362632] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.362647] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.362662] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.362679] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] 
[lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756362678, replica_locations:[]}) [2024-09-13 13:02:36.362724] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1912746, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.365107] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.366322] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.367724] INFO [STORAGE] runTimerTask (ob_tenant_memory_printer.cpp:32) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7] === Run print tenant memory usage task === [2024-09-13 13:02:36.367768] INFO [STORAGE] print_tenant_usage (ob_tenant_memory_printer.cpp:102) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=20] ====== tenants memory info ====== === TENANTS MEMORY INFO === divisive_memory_used= 49,065,984 [TENANT_MEMORY] tenant_id= 500 mem_tenant_limit= 9,223,372,036,854,775,807 mem_tenant_hold= 542,306,304 kv_cache_mem= 0 [TENANT_MEMORY] tenant_id= 508 mem_tenant_limit= 1,073,741,824 mem_tenant_hold= 23,621,632 kv_cache_mem= 0 [TENANT_MEMORY] tenant_id= 1 now= 1,726,203,756,366,827 active_memstore_used= 0 total_memstore_used= 0 total_memstore_hold= 0 memstore_freeze_trigger_limit= 257,698,020 memstore_limit= 1,288,490,160 mem_tenant_limit= 3,221,225,472 mem_tenant_hold= 355,610,624 
max_mem_memstore_can_get_now= 0 memstore_alloc_pos= 0 memstore_frozen_pos= 0 memstore_reclaimed_pos= 0 [2024-09-13 13:02:36.367977] INFO [STORAGE] print_tenant_usage (ob_tenant_memory_printer.cpp:114) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=22] [CHUNK_MGR] limit= 17,179,869,184 hold= 923,635,712 total_hold= 987,758,592 used= 921,538,560 freelists_hold= 2,097,152 total_maps= 294 total_unmaps= 3 large_maps= 39 large_unmaps= 0 huge_maps= 6 huge_unmaps= 3 memalign=0 resident_size= 947,052,544 virtual_memory_used= 1,835,302,912 [CHUNK_MGR] 2 MB_CACHE: hold= 2,097,152 free= 1 pushes= 1,476 pops= 1,475 maps= 249 unmaps= 0 [CHUNK_MGR] 4 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 19 unmaps= 0 [CHUNK_MGR] 6 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 11 unmaps= 0 [CHUNK_MGR] 8 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 2 unmaps= 0 [CHUNK_MGR] 10 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 4 unmaps= 0 [CHUNK_MGR] 12 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 0 unmaps= 0 [CHUNK_MGR] 14 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 1 unmaps= 0 [CHUNK_MGR] 16 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 0 unmaps= 0 [CHUNK_MGR] 18 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 2 unmaps= 0 [CHUNK_MGR] 20 MB_CACHE: hold= 0 free= 0 pushes= 0 pops= 0 maps= 0 unmaps= 0 [2024-09-13 13:02:36.368077] INFO print (ob_malloc_time_monitor.cpp:39) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=74] [MALLOC_TIME_MONITOR] show the distribution of ob_malloc's cost_time [MALLOC_TIME_MONITOR] [ 0, 10): delta_total_cost_time= 2231553, delta_count= 18565340, avg_cost_time= 0 [MALLOC_TIME_MONITOR] [ 10, 100): delta_total_cost_time= 30317, delta_count= 1786, avg_cost_time= 16 [MALLOC_TIME_MONITOR] [ 100, 1000): delta_total_cost_time= 7421, delta_count= 27, avg_cost_time= 274 [MALLOC_TIME_MONITOR] [ 1000, 10000): delta_total_cost_time= 0, delta_count= 0, avg_cost_time= 0 [MALLOC_TIME_MONITOR] [ 10000, 100000): delta_total_cost_time= 0, 
delta_count= 0, avg_cost_time= 0 [MALLOC_TIME_MONITOR] [ 100000, 1000000): delta_total_cost_time= 0, delta_count= 0, avg_cost_time= 0 [MALLOC_TIME_MONITOR] [ 1000000, 9223372036854775807): delta_total_cost_time= 0, delta_count= 0, avg_cost_time= 0 [2024-09-13 13:02:36.368509] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B57-0-0] [lt=11] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:36.368527] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B57-0-0] [lt=16][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203756368080], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:36.369045] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE7-0-0] [lt=11][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203756368616, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035787, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203756368046}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:36.369073] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE7-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:36.369546] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE7-0-0] [lt=6][errcode=-8004] checking 
cluster ID failed(ret=-8004) [2024-09-13 13:02:36.372772] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=12] table not exist(tenant_id=1, database_id=201001, table_name=__all_unit, table_name.ptr()="data_size:10, data:5F5F616C6C5F756E6974", ret=-5019) [2024-09-13 13:02:36.372798] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=23][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_unit, ret=-5019) [2024-09-13 13:02:36.372806] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_unit, db_name=oceanbase) [2024-09-13 13:02:36.372812] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_unit) [2024-09-13 13:02:36.372819] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:36.372824] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:36.372830] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=5][errcode=-5019] Table 'oceanbase.__all_unit' doesn't exist [2024-09-13 13:02:36.372836] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) 
[20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=5][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:36.372840] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=3][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:36.372844] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=3][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:36.372849] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=5][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:36.372854] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=4][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:36.372858] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=3][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:36.372863] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:36.372871] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:36.372899] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=27][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 
13:02:36.372905] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=5][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.372909] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:36.372913] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1), ret=-5019) [2024-09-13 13:02:36.372921] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:36.372927] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=5][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * from __all_unit where resource_pool_id in (select resource_pool_id from __all_resource_pool where tenant_id = 1)"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:36.372939] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=11][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:36.372952] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 
13:02:36.372957] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=4][errcode=-5019] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:36.372970] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.373020] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:109) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7F-0-0] [lt=0] refresh tenant units(sys_unit_cnt=0, units=[], ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:36.373765] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=11] table not exist(tenant_id=1, database_id=201001, table_name=__all_tenant, table_name.ptr()="data_size:12, data:5F5F616C6C5F74656E616E74", ret=-5019) [2024-09-13 13:02:36.373815] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20103][OmtNodeBalancer][T1][YB42AC103323-000621F920760C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.373871] INFO [SERVER] cal_all_part_disk_default_percentage (ob_server_utils.cpp:301) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7F-0-0] [lt=0] cal_all_part_disk_default_percentage succ(data_dir="/data1/oceanbase/data/sstable", clog_dir="/data1/oceanbase/data/clog", shared_mode=true, data_disk_total_size=300808052736, data_disk_default_percentage=60, clog_disk_total_size=300808052736, clog_disk_default_percentage=30) [2024-09-13 13:02:36.373895] INFO [SERVER] decide_disk_size (ob_server_utils.cpp:337) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7F-0-0] [lt=24] decide disk size finished(suggested_disk_size=21474836480, suggested_disk_percentage=0, default_disk_percentage=30, total_space=300808052736, 
disk_size=21474836480) [2024-09-13 13:02:36.373902] INFO [SERVER] get_log_disk_info_in_config (ob_server_utils.cpp:88) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7F-0-0] [lt=7] get_log_disk_info_in_config(suggested_data_disk_size=21474836480, suggested_clog_disk_size=21474836480, suggested_data_disk_percentage=0, suggested_clog_disk_percentage=0, log_disk_size=21474836480, log_disk_percentage=0, total_log_disk_size=300808052736) [2024-09-13 13:02:36.373911] INFO [CLOG] try_resize (ob_server_log_block_mgr.cpp:800) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7F-0-0] [lt=7] try_resize success(ret=0, log_disk_size=21474836480, total_log_disk_size=300808052736, this={dir::"/data1/oceanbase/data/clog/log_pool", dir_fd:15, meta_fd:16, log_pool_meta:{curr_total_size:21474836480, next_total_size:21474836480, status:0}, min_block_id:0, max_block_id:320, min_log_disk_size_for_all_tenants_:0, is_inited:true}) [2024-09-13 13:02:36.373924] INFO [SERVER.OMT] run1 (ob_tenant_node_balancer.cpp:133) [20103][OmtNodeBalancer][T0][YB42AC103323-000621F920760C7F-0-0] [lt=13] refresh tenant config(tenants=[], ret=-5019) [2024-09-13 13:02:36.374937] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.375197] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.375216] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.375226] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.375237] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.375251] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756375250, replica_locations:[]}) [2024-09-13 13:02:36.375265] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.375285] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.375295] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.375314] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.375351] WDIAG [SQL.PC] 
common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564252470, cache_obj->added_lc()=false, cache_obj->get_object_id()=785, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.375844] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.376010] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.376035] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.376044] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.376054] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.376066] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756376066, replica_locations:[]}) [2024-09-13 13:02:36.376080] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.376098] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.376107] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.376130] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.376165] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564253284, cache_obj->added_lc()=false, cache_obj->get_object_id()=784, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.376409] WDIAG [SERVER] 
fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.376583] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.376604] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.376615] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.376625] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.376638] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756376638, replica_locations:[]}) [2024-09-13 13:02:36.376685] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=13000, remain_us=1898785, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.376986] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.377227] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.377246] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.377253] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.377261] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.377271] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756377271, replica_locations:[]}) [2024-09-13 13:02:36.377308] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1866563, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.383976] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.385534] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.386974] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:2420) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=11] dump tenant info(tenant={id:1, tenant_meta:{unit:{tenant_id:1, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"hidden_sys_unit", resource:{min_cpu:2, max_cpu:2, memory_size:"3GB", log_disk_size:"0GB", min_iops:9223372036854775807, max_iops:9223372036854775807, iops_weight:2}}, mode:0, create_timestamp:1726203737966288, is_removed:false}, super_block:{tenant_id:1, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true, version:2}, create_status:1}, unit_min_cpu:"2.000000000000000000e+00", unit_max_cpu:"2.000000000000000000e+00", total_worker_cnt:25, min_worker_cnt:10, max_worker_cnt:150, stopped:0, worker_us:179135357, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:143, recv_lp_rpc_cnt:0, recv_mysql_cnt:2, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:928, workers:10, nesting workers:8, req_queue:total_size=1 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=1 queue[5]=0 , multi_level_queue:total_size=28 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=28 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , 
recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=60 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:{group_id:10, queue_size:0, recv_req_cnt:18, min_worker_cnt:2, max_worker_cnt:150, multi_level_queue_:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , worker_cnt:2, nesting_worker_cnt:0, token_change:1726203739127015}{group_id:5, queue_size:0, recv_req_cnt:37, min_worker_cnt:2, max_worker_cnt:150, multi_level_queue_:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , worker_cnt:2, nesting_worker_cnt:0, token_change:1726203738351773}{group_id:19, queue_size:0, recv_req_cnt:1, min_worker_cnt:2, max_worker_cnt:150, multi_level_queue_:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , worker_cnt:1, nesting_worker_cnt:0, token_change:1726203741946044}{group_id:9, queue_size:0, recv_req_cnt:2680, min_worker_cnt:2, max_worker_cnt:150, multi_level_queue_:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , worker_cnt:2, nesting_worker_cnt:0, token_change:1726203738260543}, rpc_stat_info: pcode=0x14a:cnt=1558 pcode=0x717:cnt=69 pcode=0x710:cnt=20 pcode=0x51c:cnt=17 pcode=0x4a9:cnt=10, token_change_ts:1726203738244760, tenant_role:1}) [2024-09-13 13:02:36.387487] INFO [SERVER.OMT] print_throttled_time (ob_tenant.cpp:1666) 
[20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=511] dump throttled time info(id_=1, throttled_time_log=group_id: 10, group: OBCG_LOC_CACHE, throttled_time: 0;group_id: 5, group: OBCG_ID_SERVICE, throttled_time: 0;group_id: 19, group: OBCG_STORAGE, throttled_time: 0;group_id: 9, group: OBCG_DETECT_RS, throttled_time: 0;tenant_id: 1, tenant_throttled_time: 0;) [2024-09-13 13:02:36.387501] INFO [SERVER.OMT] run1 (ob_multi_tenant.cpp:2420) [20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=14] dump tenant info(tenant={id:508, tenant_meta:{unit:{tenant_id:508, unit_id:1000, has_memstore:true, unit_status:"NORMAL", config:{unit_config_id:1000, name:"virtual_tenant_unit", resource:{min_cpu:5, max_cpu:5, memory_size:"1GB", log_disk_size:"2GB", min_iops:10000, max_iops:50000, iops_weight:0}}, mode:0, create_timestamp:1726203736354211, is_removed:false}, super_block:{tenant_id:508, replay_start_point:ObLogCursor{file_id=1, log_id=1, offset=0}, ls_meta_entry:[-1](ver=0,mode=0,seq=0), tablet_meta_entry:[-1](ver=0,mode=0,seq=0), is_hidden:true, version:2}, create_status:1}, unit_min_cpu:"5.000000000000000000e+00", unit_max_cpu:"5.000000000000000000e+00", total_worker_cnt:30, min_worker_cnt:22, max_worker_cnt:150, stopped:0, worker_us:1951570, recv_hp_rpc_cnt:0, recv_np_rpc_cnt:0, recv_lp_rpc_cnt:0, recv_mysql_cnt:0, recv_task_cnt:0, recv_large_req_cnt:0, tt_large_quries:0, pop_normal_cnt:39773, workers:22, nesting workers:8, req_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 , multi_level_queue:total_size=0 queue[0]=0 queue[1]=0 queue[2]=0 queue[3]=0 queue[4]=0 queue[5]=0 queue[6]=0 queue[7]=0 queue[8]=0 queue[9]=0 , recv_level_rpc_cnt:cnt[0]=0 cnt[1]=0 cnt[2]=0 cnt[3]=0 cnt[4]=0 cnt[5]=0 cnt[6]=0 cnt[7]=0 cnt[8]=0 cnt[9]=0 , group_map:, rpc_stat_info:, token_change_ts:1726203736360714, tenant_role:0}) [2024-09-13 13:02:36.387956] INFO [SERVER.OMT] print_throttled_time (ob_tenant.cpp:1666) 
[20102][MultiTenant][T0][Y0-0000000000000000-0-0] [lt=454] dump throttled time info(id_=508, throttled_time_log=tenant_id: 508, tenant_throttled_time: 0;) [2024-09-13 13:02:36.389871] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.390119] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.390137] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.390147] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.390158] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.390171] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756390170, replica_locations:[]}) [2024-09-13 13:02:36.390185] INFO 
[SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.390205] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.390215] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.390246] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.390285] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564267403, cache_obj->added_lc()=false, cache_obj->get_object_id()=786, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.391047] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.391246] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.391265] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.391276] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.391287] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.391303] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756391302, replica_locations:[]}) [2024-09-13 13:02:36.391312] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=18] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:36.391363] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1884107, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 
13:02:36.392474] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.392809] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.392827] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.392850] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.392857] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.392865] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756392865, replica_locations:[]}) [2024-09-13 13:02:36.392886] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20] [TABLET_LOCATION] 
batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.392901] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.392909] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.392932] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.392965] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564270083, cache_obj->added_lc()=false, cache_obj->get_object_id()=787, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.393131] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=15] ====== tenant freeze timer task ====== [2024-09-13 13:02:36.393158] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=16][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:36.393624] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.393870] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.393898] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.393917] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.393927] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.393935] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756393934, replica_locations:[]}) [2024-09-13 13:02:36.393967] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1849903, base_sleep_us=1000, retry_sleep_type=1, 
v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.395916] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:36.404478] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.405599] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.405846] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.405896] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.405919] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.405933] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.405949] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.405979] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756405979, replica_locations:[]}) [2024-09-13 13:02:36.406000] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.406024] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.406043] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.406069] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.406111] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564283228, cache_obj->added_lc()=false, cache_obj->get_object_id()=788, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 
0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.407067] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.407296] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.407317] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.407332] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.407348] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.407365] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203756407364, replica_locations:[]}) [2024-09-13 13:02:36.407420] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1868049, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.409912] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=17][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:36.410144] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.410435] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.410462] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.410469] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.410476] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) 
[2024-09-13 13:02:36.410484] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756410484, replica_locations:[]}) [2024-09-13 13:02:36.410494] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.410533] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.410540] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.410554] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.410592] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564287708, cache_obj->added_lc()=false, cache_obj->get_object_id()=789, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 
0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.411307] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.411557] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.411574] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.411580] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.411588] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.411626] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=33] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756411625, replica_locations:[]}) [2024-09-13 13:02:36.411677] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1832194, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.422626] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.422944] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.422962] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.422973] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.422984] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.422997] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203756422996, replica_locations:[]}) [2024-09-13 13:02:36.423012] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.423031] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.423042] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.423067] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.423106] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564300223, cache_obj->added_lc()=false, cache_obj->get_object_id()=790, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.423894] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.424093] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.424114] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.424125] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.424135] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.424147] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756424147, replica_locations:[]})
[2024-09-13 13:02:36.424187] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1851282, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.425577] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.426916] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.428869] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.429229] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.429245] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.429254] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756429253, replica_locations:[]})
[2024-09-13 13:02:36.429267] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.429285] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.429290] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.429322] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.429358] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564306477, cache_obj->added_lc()=false, cache_obj->get_object_id()=791, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.430080] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.430385] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.430412] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.430421] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756430420, replica_locations:[]})
[2024-09-13 13:02:36.430485] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1813386, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.436548] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.437227] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.438543] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.440193] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.440355] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.440579] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.440596] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.440610] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756440610, replica_locations:[]})
[2024-09-13 13:02:36.440627] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.440646] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.440656] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.440689] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.440730] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564317848, cache_obj->added_lc()=false, cache_obj->get_object_id()=792, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.441509] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.441531] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.441821] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.441833] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.441842] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756441841, replica_locations:[]})
[2024-09-13 13:02:36.441890] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1833579, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.444137] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.445221] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.447535] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.448717] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.448776] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.448791] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.449062] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.449086] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.449099] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756449099, replica_locations:[]})
[2024-09-13 13:02:36.449120] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.449145] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.449158] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.449183] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.449245] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=26][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564326361, cache_obj->added_lc()=false, cache_obj->get_object_id()=793, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.450033] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.450058] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690067-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.450353] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.450817] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.450838] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.450854] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756450853, replica_locations:[]})
[2024-09-13 13:02:36.450923] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=19000, remain_us=1792948, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.454594] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.455662] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.459065] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.459350] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.459362] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.459371] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756459371, replica_locations:[]})
[2024-09-13 13:02:36.459385] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.459400] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.459405] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.459427] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.459466] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564336586, cache_obj->added_lc()=false, cache_obj->get_object_id()=794, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.460171] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.460452] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.460464] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.460472] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756460472, replica_locations:[]})
[2024-09-13 13:02:36.460508] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1814961, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.460961] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD5-0-0] [lt=42][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756460497)
[2024-09-13 13:02:36.460989] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD5-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203756460497}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:36.460999] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:36.461017] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756460993)
[2024-09-13 13:02:36.461030] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203756360974, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:36.461043] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-09-13 13:02:36.461064] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.461070] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.461074] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756461052)
[2024-09-13 13:02:36.461201] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.462465] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.469009] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.470101] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.470162] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.470376] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.470400] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.470419] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.470429] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756470429, replica_locations:[]})
[2024-09-13 13:02:36.470451] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.470483] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.470492] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.470526] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.470559] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564347679, cache_obj->added_lc()=false, cache_obj->get_object_id()=795, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.471282] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.471525] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.471542] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.471551] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756471550, replica_locations:[]})
[2024-09-13 13:02:36.471589] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1772281, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.472213] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.476295] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.476681] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.477349] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.477558] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.477669] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.478017] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.478727] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.478860] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.478997] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.479011] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.479019] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756479019, replica_locations:[]})
[2024-09-13 13:02:36.479032] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.479050] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.479058] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.479075] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.479106] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564356226, cache_obj->added_lc()=false, cache_obj->get_object_id()=796, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.479839] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.480078] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.480092] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.480100] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756480099, replica_locations:[]})
[2024-09-13 13:02:36.480137] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1795332, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.487404] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.488573] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.491750] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:36.492065] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.492079] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.492088] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1},
cluster_id:1726203323}, renew_time:1726203756492087, replica_locations:[]}) [2024-09-13 13:02:36.492113] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.492130] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.492147] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.492164] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.492193] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564369313, cache_obj->added_lc()=false, cache_obj->get_object_id()=797, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.492885] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.493105] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.493125] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.493137] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756493137, replica_locations:[]}) [2024-09-13 13:02:36.493175] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=21000, remain_us=1750695, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.495157] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.496006] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=14] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7) [2024-09-13 13:02:36.496645] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.498058] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.499164] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.499302] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.499756] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.499769] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.499778] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756499777, replica_locations:[]}) [2024-09-13 13:02:36.499791] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.499807] WDIAG [SQL] 
do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.499815] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.499838] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.499868] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564376988, cache_obj->added_lc()=false, cache_obj->get_object_id()=798, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.500576] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.500801] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.500818] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.500829] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756500828, replica_locations:[]}) [2024-09-13 13:02:36.500866] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1774603, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.509761] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.511303] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.513494] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:36.514398] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.514945] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, 
ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.514965] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.514977] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756514977, replica_locations:[]}) [2024-09-13 13:02:36.514990] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.515013] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.515022] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.515041] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.515080] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564392199, 
cache_obj->added_lc()=false, cache_obj->get_object_id()=799, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.515916] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.516181] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.516197] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.516220] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756516219, replica_locations:[]}) [2024-09-13 13:02:36.516264] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1727607, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.520189] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.521074] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.521303] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.521326] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.521338] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756521338, replica_locations:[]}) [2024-09-13 13:02:36.521358] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.521383] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.521396] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.521418] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.521424] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.521476] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564398591, cache_obj->added_lc()=false, cache_obj->get_object_id()=800, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.522351] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.522543] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.522566] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=22] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.522581] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756522581, replica_locations:[]}) [2024-09-13 13:02:36.522637] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1752833, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.522680] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.524057] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.536589] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.538024] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.538445] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.539062] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] leader 
doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.539082] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.539092] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756539091, replica_locations:[]}) [2024-09-13 13:02:36.539108] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.539124] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.539130] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:36.539163] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:36.539202] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564416320, cache_obj->added_lc()=false, cache_obj->get_object_id()=801, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:36.540034] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.540269] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.540287] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.540295] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756540294, replica_locations:[]}) [2024-09-13 13:02:36.540334] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1703536, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, 
timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.543809] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.544507] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.544526] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.544536] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756544535, replica_locations:[]}) [2024-09-13 13:02:36.544549] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.544567] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.544586] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=0] fail close main query(ret=0, 
do_close_plan_ret=-4006)
[2024-09-13 13:02:36.544608] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.544641] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564421760, cache_obj->added_lc()=false, cache_obj->get_object_id()=802, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.545417] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4719] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:36.545866] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.545906] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=38] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.545920] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756545919, replica_locations:[]})
[2024-09-13 13:02:36.545974] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1729495, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.561068] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:36.561111] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756561062)
[2024-09-13 13:02:36.561127] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203756461040, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:36.561156] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.561165] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.561170] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756561141)
[2024-09-13 13:02:36.564003] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.564025] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.564036] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756564035, replica_locations:[]})
[2024-09-13 13:02:36.564052] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.564075] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.564099] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:36.564117] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:36.564158] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564441276, cache_obj->added_lc()=false, cache_obj->get_object_id()=803, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:36.565257] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.565277] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.565285] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756565284, replica_locations:[]})
[2024-09-13 13:02:36.565327] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=24000, remain_us=1678544, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.568368] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.568387] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.568400] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756568399, replica_locations:[]})
[2024-09-13 13:02:36.568413] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.568428] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.569420] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.569447] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.569457] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756569456, replica_locations:[]})
[2024-09-13 13:02:36.569497] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1705973, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.589784] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.589804] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.589815] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756589814, replica_locations:[]})
[2024-09-13 13:02:36.589833] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.589849] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.590891] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.590911] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.590919] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756590918, replica_locations:[]})
[2024-09-13 13:02:36.590961] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=25000, remain_us=1652909, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.592902] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.592922] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.592932] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756592931, replica_locations:[]})
[2024-09-13 13:02:36.592942] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.592957] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.593913] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.593934] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.593943] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756593942, replica_locations:[]})
[2024-09-13 13:02:36.593981] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] will sleep(sleep_us=24000, remain_us=1681489, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.616477] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.616498] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.616509] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756616508, replica_locations:[]})
[2024-09-13 13:02:36.616541] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=31] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.616562] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.617626] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.617648] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.617657] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756617656, replica_locations:[]})
[2024-09-13 13:02:36.617696] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=26000, remain_us=1626174, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.618361] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.618378] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.618388] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756618387, replica_locations:[]})
[2024-09-13 13:02:36.618402] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.618420] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.619418] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.619443] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.619452] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756619451, replica_locations:[]})
[2024-09-13 13:02:36.619491] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] will sleep(sleep_us=25000, remain_us=1655978, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.627476] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=36] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:36.638801] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=26][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4018, dropped:72, tid:19944}])
[2024-09-13 13:02:36.644089] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.644111] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.644122] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.644133] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.644145] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756644145, replica_locations:[]})
[2024-09-13 13:02:36.644169] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.644188] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.644909] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.644925] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.644932] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.644942] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.644954] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756644953, replica_locations:[]})
[2024-09-13 13:02:36.644967] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.644986] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.645252] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.645292] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.645303] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.645313] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.645325] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756645324, replica_locations:[]})
[2024-09-13 13:02:36.645369] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1598502, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.645982] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.645998] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.646005] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.646015] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.646025] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756646024, replica_locations:[]})
[2024-09-13 13:02:36.646061] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1629409, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.661137] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.661158] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.661166] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756661123)
[2024-09-13 13:02:36.661158] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD6-0-0] [lt=27][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756660668)
[2024-09-13 13:02:36.661189] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:36.661182] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD6-0-0] [lt=17][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203756660668}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:36.661206] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756661185)
[2024-09-13 13:02:36.661213] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203756561140, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:36.661224] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.661230] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.661233] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756661221)
[2024-09-13 13:02:36.672574] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.672594] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.672601] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.672609] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.672620] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756672619, replica_locations:[]})
[2024-09-13 13:02:36.672631] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.672655] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.672759] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.672807] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=47][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.672818] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018,
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.672828] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.672841] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756672840, replica_locations:[]}) [2024-09-13 13:02:36.672854] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.672871] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.673965] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.673983] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.673989] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.674000] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.674011] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756674010, replica_locations:[]}) [2024-09-13 13:02:36.674058] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1601411, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.674150] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.674169] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.674175] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.674186] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.674194] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756674193, replica_locations:[]}) [2024-09-13 13:02:36.674244] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1569627, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.696097] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6) [2024-09-13 13:02:36.701534] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.701554] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.701561] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.701569] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.701578] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756701578, replica_locations:[]}) [2024-09-13 13:02:36.701592] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.701612] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.702739] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.702782] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=41][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.702789] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.702796] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.702808] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756702807, replica_locations:[]}) [2024-09-13 13:02:36.702821] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.702844] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.702888] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.702904] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.702915] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.702927] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.702942] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756702941, replica_locations:[]}) [2024-09-13 13:02:36.702999] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] will sleep(sleep_us=28000, remain_us=1572471, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.703957] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:36.703983] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.703995] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.704003] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.704011] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756704010, replica_locations:[]}) [2024-09-13 13:02:36.704049] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1539821, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.713830] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 
13:02:36.728301] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:36.728432] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=19] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:36.731509] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.731535] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.731545] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.731552] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.731562] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756731561, replica_locations:[]}) [2024-09-13 13:02:36.731574] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.731596] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.732707] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.732737] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.732744] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.732757] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.732770] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] 
[LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756732769, replica_locations:[]}) [2024-09-13 13:02:36.732829] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1542641, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.733457] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.733477] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.733490] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.733501] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.733516] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756733515, replica_locations:[]}) [2024-09-13 13:02:36.733541] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.733565] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.734660] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.734680] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.734688] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.734698] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.734706] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756734705, replica_locations:[]}) [2024-09-13 13:02:36.734743] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1509127, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.738923] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=20][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:2137, tid:19944}]) [2024-09-13 13:02:36.761256] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:36.761280] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756761251) [2024-09-13 13:02:36.761291] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203756661219, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:36.761311] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.761319] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.761327] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756761297) [2024-09-13 13:02:36.762273] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.762298] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.762308] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) 
[2024-09-13 13:02:36.762318] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.762339] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756762339, replica_locations:[]})
[2024-09-13 13:02:36.762353] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.762370] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:29, local_retry_times:29, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:36.762385] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.762394] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:36.762401] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:36.762405] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:36.762417] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:36.763181] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.763208] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=26][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:36.763486] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.763501] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.763507] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.763516] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.763526] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756763525, replica_locations:[]})
[2024-09-13 13:02:36.763535] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.763546] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:36.763553] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.763561] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:36.763569] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:36.763577] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:36.763592] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:36.763603] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:36.763608] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:36.763614] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:36.763618] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:36.763626] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:36.763632] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:36.763640] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:36.763644] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:36.763651] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:36.763654] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:36.763662] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:36.763666] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:36.763675] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:36.763683] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:36.763691] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:36.763695] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:36.763703] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:36.763709] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=30, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:36.763724] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] will sleep(sleep_us=30000, remain_us=1511746, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.765184] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.765201] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.765207] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.765235] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=27] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.765251] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756765251, replica_locations:[]})
[2024-09-13 13:02:36.765265] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.765277] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:30, local_retry_times:30, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:36.765290] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.765301] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:36.765307] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:36.765311] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:36.765331] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:36.766001] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.766038] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=36][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:36.766376] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.766393] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.766399] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.766406] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.766416] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756766415, replica_locations:[]})
[2024-09-13 13:02:36.766429] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.766435] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:36.766458] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.766480] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=21][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:36.766486] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:36.766494] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:36.766506] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:36.766515] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:36.766521] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:36.766528] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:36.766533] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:36.766539] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:36.766544] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:36.766553] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:36.766562] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:36.766569] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:36.766573] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:36.766579] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:36.766587] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:36.766595] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:36.766602] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:36.766613] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:36.766624] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:36.766635] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:36.766643] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=31, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:36.766661] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14] will sleep(sleep_us=31000, remain_us=1477209, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.794084] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.794106] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.794112] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.794119] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.794127] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756794127, replica_locations:[]})
[2024-09-13 13:02:36.794138] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.794150] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:30, local_retry_times:30, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:36.794166] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.794174] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:36.794181] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:36.794185] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:36.794203] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:36.794937] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.794962] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=24][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:36.795244] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.795258] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.795264] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.795271] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.795278] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756795277, replica_locations:[]})
[2024-09-13 13:02:36.795291] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.795297] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:36.795303] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.795314] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:36.795324] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:36.795331] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:36.795343] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:36.795352] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:36.795358] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:36.795365] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:36.795369] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:36.795376] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:36.795399] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=21][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:36.795407] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:36.795411] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:36.795417] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:36.795421] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:36.795428] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:36.795433] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:36.795452] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:36.795459] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:36.795467] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:36.795471] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:36.795478] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:36.795482] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=31, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:36.795496] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] will sleep(sleep_us=31000, remain_us=1479973, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.798190] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.798212] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.798219] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.798226] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.798236] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756798235, replica_locations:[]})
[2024-09-13 13:02:36.798249] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.798264] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:31, local_retry_times:31, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:36.798277] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.798288] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:36.798295] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:36.798299] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:36.798326] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=21][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:36.799005] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323,
tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:36.799027] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=21][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:36.799363] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.799377] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.799382] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.799389] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.799397] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756799396, replica_locations:[]}) [2024-09-13 13:02:36.799409] 
WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:36.799428] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:36.799434] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:36.799451] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:36.799457] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:36.799465] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, 
replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:36.799476] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:36.799485] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:36.799490] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:36.799495] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:36.799498] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:36.799505] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:36.799516] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:36.799524] WDIAG [SQL.OPT] 
generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:36.799528] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:36.799535] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:36.799540] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:36.799546] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:36.799551] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:36.799561] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:36.799569] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.799576] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:36.799580] WDIAG [SQL] stmt_query (ob_sql.cpp:229) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:36.799588] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:36.799597] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=32, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:36.799609] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] will sleep(sleep_us=32000, remain_us=1444262, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.799626] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:36.827019] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.827044] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=24][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.827051] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.827059] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.827071] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756827070, replica_locations:[]}) [2024-09-13 13:02:36.827086] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.827103] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:31, local_retry_times:31, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:36.827120] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4006] 
exec result is null(ret=-4006) [2024-09-13 13:02:36.827132] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:36.827139] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:36.827143] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:36.827158] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:36.828081] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:36.828109] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:36.828433] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) 
[2024-09-13 13:02:36.828454] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.828460] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.828466] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.828474] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756828473, replica_locations:[]}) [2024-09-13 13:02:36.828486] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:36.828493] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:36.828502] WDIAG [SHARE.LOCATION] 
get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:36.828525] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=22][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:36.828533] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:36.828541] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:36.828554] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:36.828561] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:36.828566] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:36.828574] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:36.828578] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:36.828585] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:36.828592] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:36.828600] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:36.828605] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:36.828612] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:36.828616] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:36.828623] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:36.828630] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:36.828639] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:36.828647] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:36.828655] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:36.828659] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:36.828665] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:36.828672] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM 
__all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=32, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:36.828689] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] will sleep(sleep_us=32000, remain_us=1446781, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.832077] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.832099] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.832106] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.832114] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.832123] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756832122, replica_locations:[]}) [2024-09-13 13:02:36.832138] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.832153] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:32, local_retry_times:32, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:36.832167] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.832194] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:36.832203] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:36.832206] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:36.832224] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, 
column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:36.833036] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.833060] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:36.833380] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.833394] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.833403] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.833412] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.833423] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756833423, replica_locations:[]})
[2024-09-13 13:02:36.833474] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=50][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.833484] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:36.833493] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:36.833504] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:36.833510] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:36.833514] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:36.833526] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:36.833536] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:36.833541] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:36.833553] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:36.833557] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:36.833563] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:36.833569] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:36.833577] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:36.833582] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:36.833588] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:36.833592] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:36.833599] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:36.833623] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:36.833631] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:36.833637] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:36.833650] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:36.833655] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:36.833662] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:36.833666] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=33, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:36.833681] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] will sleep(sleep_us=33000, remain_us=1410190, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.845812] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:36.845839] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=26][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0")
[2024-09-13 13:02:36.845866] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1})
[2024-09-13 13:02:36.845901] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=34][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1)
[2024-09-13 13:02:36.845922] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=11] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]})
[2024-09-13 13:02:36.861274] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD7-0-0] [lt=29][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756860823)
[2024-09-13 13:02:36.861284] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.861302] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.861308] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.861310] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.861319] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.861320] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.861301] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD7-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203756860823}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:36.861327] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756861293)
[2024-09-13 13:02:36.861333] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756861332, replica_locations:[]})
[2024-09-13 13:02:36.861348] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.861350] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:36.861366] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756861338)
[2024-09-13 13:02:36.861371] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:32, local_retry_times:32, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:36.861377] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203756761297, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:36.861387] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.861393] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.861399] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:36.861402] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:36.861406] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756861388)
[2024-09-13 13:02:36.861409] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:36.861415] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:36.862650] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.862669] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.862676] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.862683] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.862691] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756862690, replica_locations:[]})
[2024-09-13 13:02:36.862735] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=33000, remain_us=1412734, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.867200] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.867220] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.867230] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.867237] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.867246] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756867246, replica_locations:[]})
[2024-09-13 13:02:36.867272] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.867293] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.868415] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.868433] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.868452] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.868462] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.868470] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756868469, replica_locations:[]})
[2024-09-13 13:02:36.868509] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=34000, remain_us=1375362, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.868973] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B58-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:36.869004] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B58-0-0] [lt=30][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203756868569], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:36.869343] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE8-0-0] [lt=1][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203756869007, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035797, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203756867998}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:36.869372] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE8-0-0] [lt=28][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:36.869896] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE8-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:36.872901] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:36.873782] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:36.873798] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:36.896197] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=14] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5)
[2024-09-13 13:02:36.896387] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.896405] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.896412] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.896423] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.896446] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756896445, replica_locations:[]})
[2024-09-13 13:02:36.896456] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.896473] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.897525] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.897543] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.897549] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.897555] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.897566] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756897565, replica_locations:[]})
[2024-09-13 13:02:36.897606] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=34000, remain_us=1377863, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.902974] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.903003] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.903015] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.903036] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.903053] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756903052, replica_locations:[]})
[2024-09-13 13:02:36.903067] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.903088] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.904186] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.904212] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.904223] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.904235] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.904247] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756904246, replica_locations:[]})
[2024-09-13 13:02:36.904298] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=35000, remain_us=1339572, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:36.914150] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=38] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:36.932137] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.932159] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.932166] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.932173] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.932184] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756932183, replica_locations:[]})
[2024-09-13 13:02:36.932195] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:36.932217] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:36.933543] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.933563] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:36.933573] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:36.933582] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:36.933597] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756933596, replica_locations:[]})
[2024-09-13 13:02:36.933659] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] will sleep(sleep_us=35000, remain_us=1341811, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:36.939152] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=19][errcode=0] Throttled WDIAG logs
in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4719, dropped:110, tid:20300}]) [2024-09-13 13:02:36.939492] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=2][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.939761] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.939795] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.939806] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.939817] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.939830] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756939829, replica_locations:[]}) [2024-09-13 13:02:36.939844] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.939865] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.940407] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.940780] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.940974] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.940995] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.941005] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.941028] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.941040] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756941039, replica_locations:[]}) [2024-09-13 13:02:36.941081] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1302790, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.960495] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.961354] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD8-0-0] [lt=33][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756960895) [2024-09-13 13:02:36.961385] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD8-0-0] [lt=24][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203756960895}, twrs={inited:true, tenant_id:1, 
self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:36.961412] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:36.961429] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:36.961452] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=22][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203756961405) [2024-09-13 13:02:36.961461] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203756861386, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") 
[2024-09-13 13:02:36.961488] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.961496] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:36.961502] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203756961473) [2024-09-13 13:02:36.961950] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.968899] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.969182] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.969203] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.969209] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] 
[lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.969220] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.969232] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756969231, replica_locations:[]}) [2024-09-13 13:02:36.969246] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.969268] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.970300] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.970526] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.970545] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.970552] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.970559] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.970571] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756970570, replica_locations:[]}) [2024-09-13 13:02:36.970620] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1304850, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:36.977276] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.977741] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.977762] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.977769] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.977780] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.977790] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756977790, replica_locations:[]}) [2024-09-13 13:02:36.977804] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:36.977837] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:36.977946] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.978786] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.979044] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.979067] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:36.979078] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:36.979107] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:36.979123] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203756979122, replica_locations:[]}) [2024-09-13 13:02:36.979179] 
INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=37000, remain_us=1264692, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:36.979478] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.993591] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:36.995148] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.006849] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.007162] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.007187] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.007198] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.007210] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.007224] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757007223, replica_locations:[]}) [2024-09-13 13:02:37.007239] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.007267] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.008194] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.008434] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.008461] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=26][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.008471] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.008482] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.008494] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757008493, replica_locations:[]}) [2024-09-13 13:02:37.008540] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=37000, remain_us=1266929, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.016352] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.016708] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.016725] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.016734] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.016744] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.016772] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757016771, replica_locations:[]}) [2024-09-13 13:02:37.016786] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.016801] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.017674] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4719] get ls handle 
failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.017991] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.018008] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.018015] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.018022] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.018030] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757018030, replica_locations:[]}) [2024-09-13 13:02:37.018056] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.018068] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=38000, remain_us=1225802, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.020159] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.027794] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.029280] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.039298] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=20][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:0, dropped:90, tid:19944}]) [2024-09-13 13:02:37.045742] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.046140] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.046166] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.046177] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.046189] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.046203] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757046203, replica_locations:[]}) [2024-09-13 13:02:37.046219] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.046254] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.046264] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.046285] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 
13:02:37.046326] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564923443, cache_obj->added_lc()=false, cache_obj->get_object_id()=833, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.047163] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.047417] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.047435] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.047471] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=35] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.047482] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.047494] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757047494, replica_locations:[]}) [2024-09-13 13:02:37.047540] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=38000, remain_us=1227930, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.056238] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.056604] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.056632] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.056639] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.056648] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list 
is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.056657] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757056656, replica_locations:[]}) [2024-09-13 13:02:37.056670] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.056689] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.056697] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.056712] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.056752] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564933870, cache_obj->added_lc()=false, cache_obj->get_object_id()=834, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 
0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.057573] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.057887] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.057912] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.057918] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.057926] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.057935] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757057934, replica_locations:[]}) [2024-09-13 13:02:37.057975] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=39000, remain_us=1185895, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.059790] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.061297] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.061407] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD9-0-0] [lt=26][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757060979) [2024-09-13 13:02:37.061443] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AD9-0-0] [lt=24][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203757060979}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, 
cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:37.061475] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.061492] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.061502] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757061462) [2024-09-13 13:02:37.062814] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.064319] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.085769] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.086107] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.086127] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.086133] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.086143] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.086156] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757086156, replica_locations:[]}) [2024-09-13 13:02:37.086169] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.086189] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.086198] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:37.086213] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.086254] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564963372, cache_obj->added_lc()=false, cache_obj->get_object_id()=835, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.087084] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.087335] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.087354] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.087360] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.087367] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.087376] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757087376, replica_locations:[]}) [2024-09-13 13:02:37.087648] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=39000, remain_us=1187822, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.093614] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=12] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:37.093626] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:37.093986] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=14] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:37.094112] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=21] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:37.094257] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] 
[lt=13] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:37.094712] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=12] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:37.094790] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=14] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:37.094900] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=11] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:37.095126] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:37.096283] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4) [2024-09-13 13:02:37.097162] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.097782] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.097803] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.097812] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.097821] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.097852] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=24] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757097851, replica_locations:[]}) [2024-09-13 13:02:37.097871] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.097909] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.097921] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.097954] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] 
[lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.098018] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6564975120, cache_obj->added_lc()=false, cache_obj->get_object_id()=836, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.098868] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.099045] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=43][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.099358] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.099377] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.099386] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.099399] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.099419] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757099418, replica_locations:[]}) [2024-09-13 13:02:37.099488] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=40000, remain_us=1144383, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.100433] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.101815] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.103322] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.114512] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 
13:02:37.119733] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=17] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:37.126888] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.127357] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.127382] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.127391] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.127402] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.127415] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757127414, replica_locations:[]}) [2024-09-13 13:02:37.127429] INFO 
[SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.127463] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.127471] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.127497] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.127537] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565004655, cache_obj->added_lc()=false, cache_obj->get_object_id()=837, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.128397] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.128628] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.128648] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.128655] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.128662] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.128669] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757128669, replica_locations:[]}) [2024-09-13 13:02:37.128714] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=40000, remain_us=1146756, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.136080] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.137583] WDIAG [SERVER] fill_ls_replica 
(ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.139449] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=22][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-5019, dropped:49, tid:19878}]) [2024-09-13 13:02:37.139689] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.139827] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC86-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:37.139930] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.139950] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.139956] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.139964] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.139977] INFO 
[SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757139976, replica_locations:[]}) [2024-09-13 13:02:37.139990] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.140006] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.140021] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.140039] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.140079] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565017197, cache_obj->added_lc()=false, cache_obj->get_object_id()=838, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 
0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.140916] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.141146] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.141165] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.141171] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.141178] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.141190] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757141189, replica_locations:[]}) [2024-09-13 13:02:37.141230] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=41000, remain_us=1102640, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.144920] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.146603] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=40][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.161526] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:37.161546] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADA-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757161066) [2024-09-13 13:02:37.161562] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757161520) [2024-09-13 13:02:37.161575] WDIAG [STORAGE.TRANS] 
do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203756961471, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:37.161566] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADA-0-0] [lt=18][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203757161066}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:37.161613] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.161624] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.161631] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757161591) [2024-09-13 13:02:37.161649] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.161659] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.161664] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757161645) [2024-09-13 13:02:37.167726] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB223F-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:37.168529] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2243-0-0] [lt=27][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:37.168831] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2244-0-0] [lt=14][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:37.168964] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.169226] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2248-0-0] [lt=11][errcode=-8004] checking cluster 
ID failed(ret=-8004) [2024-09-13 13:02:37.169230] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.169269] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=38][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.169279] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.169289] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.169301] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757169300, replica_locations:[]}) [2024-09-13 13:02:37.169315] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.169334] 
WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.169343] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.169362] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.169402] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565046520, cache_obj->added_lc()=false, cache_obj->get_object_id()=839, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.169454] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2249-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:37.169849] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB224D-0-0] [lt=7][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:37.170091] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB224E-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:37.170393] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4719] get ls handle 
failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.170503] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2252-0-0] [lt=11][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:37.170600] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.170628] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.170635] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.170643] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.170651] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757170650, replica_locations:[]}) [2024-09-13 13:02:37.170694] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] will 
sleep(sleep_us=41000, remain_us=1104775, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.170746] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2253-0-0] [lt=6][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:37.171280] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2257-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:37.174144] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.175618] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.182418] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.182642] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.182662] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.182668] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.182678] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.182687] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757182686, replica_locations:[]}) [2024-09-13 13:02:37.182700] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.182720] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.182728] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.182757] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.182910] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] 
[lt=115][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565060028, cache_obj->added_lc()=false, cache_obj->get_object_id()=840, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.183689] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.183926] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.183945] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.183951] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.183958] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.183969] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757183968, replica_locations:[]}) [2024-09-13 13:02:37.184011] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=42000, remain_us=1059859, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.189154] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.190660] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.197919] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=20] PNIO [ratelimit] time: 1726203757197917, bytes: 4711511, bw: 0.256929 MB/s, add_ts: 1000345, add_bytes: 269503 [2024-09-13 13:02:37.211927] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.212297] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.212314] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:37.212321] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.212329] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.212340] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757212340, replica_locations:[]}) [2024-09-13 13:02:37.212355] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.212373] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.212382] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.212410] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=0] the key is not valid which at 
plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.212475] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565089592, cache_obj->added_lc()=false, cache_obj->get_object_id()=841, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.213133] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.213486] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.213816] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.213830] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.213836] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.213846] INFO [SHARE.PT] 
get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.213855] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757213854, replica_locations:[]}) [2024-09-13 13:02:37.213912] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=42000, remain_us=1061558, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.214706] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.218435] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782EB-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.218858] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=22] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:37.226224] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.226485] WDIAG [SHARE.PT] find_leader 
(ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.226505] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.226512] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.226529] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.226540] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757226540, replica_locations:[]}) [2024-09-13 13:02:37.226554] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.226575] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.226584] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.226605] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.226647] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565103765, cache_obj->added_lc()=false, cache_obj->get_object_id()=842, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.227564] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.227765] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.227782] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.227796] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.227804] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.227813] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757227812, replica_locations:[]}) [2024-09-13 13:02:37.227859] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=1016012, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.228396] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=30] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:37.228539] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=17] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:37.229640] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=16] 
gc stale ls task succ [2024-09-13 13:02:37.234252] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.234869] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=13] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:37.235725] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.239365] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=16][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:37.239387] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:37.239393] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:37.239401] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:37.253284] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.254813] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.256069] WDIAG [SERVER] 
fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.256477] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.256495] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.256501] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.256509] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.256519] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757256518, replica_locations:[]}) [2024-09-13 13:02:37.256556] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=34] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, 
ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.256580] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.256590] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.256611] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.256658] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565133774, cache_obj->added_lc()=false, cache_obj->get_object_id()=843, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.258241] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.258644] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.258662] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.258668] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.258676] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.258685] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757258685, replica_locations:[]}) [2024-09-13 13:02:37.258733] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=43000, remain_us=1016737, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.261719] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:37.261744] WDIAG 
[STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=25][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757261712) [2024-09-13 13:02:37.261754] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203757161588, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:37.261782] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.261791] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.261796] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757261771) [2024-09-13 13:02:37.267895] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:37.267997] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C8F-0-0] 
[lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.268306] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.268321] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.268328] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.268337] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.268363] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=6][errcode=0] server is initiating(server_id=0, local_seq=58, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:37.269356] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:37.269383] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=25][errcode=-5019] synonym not exist(tenant_id=1, 
database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:37.269391] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:37.269405] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=13][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:37.269412] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:37.269417] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=6][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:37.269423] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:37.269430] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:37.269434] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:37.269449] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=14][errcode=-5019] resolve basic table 
failed(ret=-5019) [2024-09-13 13:02:37.269453] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:37.269457] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=3][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:37.269461] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=3][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:37.269465] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:37.269474] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:37.269489] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=14][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:37.269497] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:37.269502] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:37.269507] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-5019] 
fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:37.269513] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=5][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:37.269520] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:37.269533] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=10][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:37.269546] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=10][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:37.269551] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=5][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:37.269555] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:37.269566] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=5][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id 
= 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:37.269575] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.269580] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:37.269584] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:37.269590] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=6][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:37.269595] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:37.269603] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=6][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203757269230, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:37.269610] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=7][errcode=-5019] read 
failed(ret=-5019) [2024-09-13 13:02:37.269614] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=3][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:37.269665] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=9][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:37.269675] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=9][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:37.269681] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=5][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:37.269686] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=4][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:37.269694] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=7][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:37.269700] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=5][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:37.269707] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C8F-0-0] [lt=6][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:37.271048] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.271320] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.271340] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.271350] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.271362] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.271375] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757271375, replica_locations:[]}) [2024-09-13 13:02:37.271390] INFO 
[SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.271414] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:37.271430] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.271450] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.271472] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.271511] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565148629, cache_obj->added_lc()=false, cache_obj->get_object_id()=844, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.272348] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.272615] 
WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.272638] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.272649] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.272660] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.272672] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757272671, replica_locations:[]}) [2024-09-13 13:02:37.272722] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=44000, remain_us=971149, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.273449] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] 
[lt=20] PNIO [ratelimit] time: 1726203757273447, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007611, add_bytes: 0 [2024-09-13 13:02:37.280311] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.282102] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.294458] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.295956] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.296366] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3) [2024-09-13 13:02:37.301909] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.302264] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.302283] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:37.302289] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.302296] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.302306] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757302305, replica_locations:[]}) [2024-09-13 13:02:37.302319] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.302337] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:37.302354] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.302364] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, 
do_close_plan_ret=-4006)
[2024-09-13 13:02:37.302390] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.302430] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565179547, cache_obj->added_lc()=false, cache_obj->get_object_id()=845, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.303554] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.303840] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.303858] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.303865] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.303872] INFO
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.303890] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757303890, replica_locations:[]})
[2024-09-13 13:02:37.303935] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=44000, remain_us=971534, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:37.314828] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=21] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:37.316913] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.317210] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.317231] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0]
[lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.317242] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.317253] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.317266] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757317266, replica_locations:[]})
[2024-09-13 13:02:37.317281] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.317302] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.317312] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.317331] WDIAG [SQL] move_to_sqlstat_cache
(ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.317368] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565194486, cache_obj->added_lc()=false, cache_obj->get_object_id()=846, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.318162] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.318387] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.318406] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.318416] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.318427] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10]
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.318449] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757318448, replica_locations:[]})
[2024-09-13 13:02:37.318492] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=45000, remain_us=925378, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:37.327787] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.329235] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.332969] INFO pn_ratelimit (group.c:643) [20054][IngressService][T0][Y0-0000000000000000-0-0] [lt=17] PNIO set ratelimit as 9223372036854775807 bytes/s, grp_id=2
[2024-09-13 13:02:37.336495] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.337913] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.346337] WDIAG [SHARE.LOCATION]
nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:37.346385] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]})
[2024-09-13 13:02:37.346401] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1})
[2024-09-13 13:02:37.346404] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CDE-0-0] [lt=18][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203757346362})
[2024-09-13 13:02:37.348117] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.348518] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.348536] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.348542] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.348551] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.348561] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757348560, replica_locations:[]})
[2024-09-13 13:02:37.348575] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.348596] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.348605] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.348622] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.348666] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set
logical del time(cache_obj->get_logical_del_time()=6565225783, cache_obj->added_lc()=false, cache_obj->get_object_id()=847, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.349561] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.349572] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=18] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1)
[2024-09-13 13:02:37.349854] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.349872] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.349888] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.349896] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.349904] INFO
[SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757349903, replica_locations:[]})
[2024-09-13 13:02:37.349973] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=45000, remain_us=925497, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:37.361671] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADB-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757361235)
[2024-09-13 13:02:37.361702] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADB-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203757361235}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0,
valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:37.361732] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:37.361747] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:37.361754] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757361719)
[2024-09-13 13:02:37.363688] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.364002] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.364025] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.364036] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.364055] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.364068] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757364068, replica_locations:[]})
[2024-09-13 13:02:37.364083] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.364105] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.364115] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.364140] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.364179] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0]
[lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565241297, cache_obj->added_lc()=false, cache_obj->get_object_id()=848, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.365014] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.365269] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.365296] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.365307] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.365323] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.365336] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS",
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757365335, replica_locations:[]})
[2024-09-13 13:02:37.365380] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=46000, remain_us=878490, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:37.369454] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B59-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:37.369475] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B59-0-0] [lt=20][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203757369043], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:37.369957] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE9-0-0] [lt=9][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203757369555, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035838, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203757369305}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:37.369989] WDIAG [RPC.FRAME] run
(ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE9-0-0] [lt=31][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:37.370494] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DE9-0-0] [lt=5][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:37.375723] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.377159] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.379463] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.380934] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.395159] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.395531] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.395551] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]},
replica count=0)
[2024-09-13 13:02:37.395558] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.395569] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.395582] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757395581, replica_locations:[]})
[2024-09-13 13:02:37.395597] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.395618] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.395625] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.395650] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache
mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.395692] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565272808, cache_obj->added_lc()=false, cache_obj->get_object_id()=849, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.396678] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.396989] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.397006] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.397012] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.397019] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.397027] INFO
[SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757397026, replica_locations:[]})
[2024-09-13 13:02:37.397074] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] will sleep(sleep_us=46000, remain_us=878396, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:37.411569] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.411981] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.412007] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.412014] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.412022] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151)
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.412031] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757412031, replica_locations:[]}) [2024-09-13 13:02:37.412045] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.412067] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.412086] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.412108] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.412149] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565289268, cache_obj->added_lc()=false, cache_obj->get_object_id()=850, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 
0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.413081] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=141][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.413329] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.413345] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.413352] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.413361] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.413372] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757413371, 
replica_locations:[]}) [2024-09-13 13:02:37.413416] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=830455, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.423444] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.424912] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.424931] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.426316] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.443273] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.443778] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.443799] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, 
replicas:[]}, replica count=0) [2024-09-13 13:02:37.443806] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.443815] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.443826] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757443825, replica_locations:[]}) [2024-09-13 13:02:37.443841] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.443864] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.443883] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.443905] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at 
plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.443950] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565321067, cache_obj->added_lc()=false, cache_obj->get_object_id()=851, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.445019] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.445482] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.445501] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.445508] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.445516] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.445525] INFO 
[SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757445525, replica_locations:[]}) [2024-09-13 13:02:37.445575] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=829895, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.452429] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690068-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.460603] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.460912] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.460947] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=34][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.460954] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.460961] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.460975] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757460974, replica_locations:[]}) [2024-09-13 13:02:37.460988] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.461011] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.461019] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.461045] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.461089] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] 
[lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565338206, cache_obj->added_lc()=false, cache_obj->get_object_id()=852, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.461781] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADC-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757461307) [2024-09-13 13:02:37.461793] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:37.461806] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADC-0-0] [lt=24][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203757461307}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, 
server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:37.461814] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757461785) [2024-09-13 13:02:37.461830] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203757261771, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:37.461857] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:37.461900] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.461913] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.461919] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757461886) [2024-09-13 13:02:37.461937] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.461946] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.461954] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757461932) [2024-09-13 13:02:37.462025] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.462225] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.462259] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=33][errcode=-4018] 
fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.462273] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.462284] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.462294] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757462294, replica_locations:[]}) [2024-09-13 13:02:37.462339] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=48000, remain_us=781532, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.468531] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.470139] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.474985] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.476498] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.492746] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.493072] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.493089] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.493095] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.493102] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.493114] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757493113, replica_locations:[]}) [2024-09-13 13:02:37.493127] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.493149] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.493158] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.493182] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.493233] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565370350, cache_obj->added_lc()=false, cache_obj->get_object_id()=853, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.494081] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:37.494326] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.494342] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.494348] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.494358] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.494366] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757494366, replica_locations:[]}) [2024-09-13 13:02:37.494409] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=48000, remain_us=781060, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.496450] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) 
[19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2) [2024-09-13 13:02:37.510540] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.510796] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.510815] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.510822] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.510832] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.510840] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757510840, replica_locations:[]}) [2024-09-13 
13:02:37.510868] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=27] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.510898] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.510903] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.510920] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.510957] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565388075, cache_obj->added_lc()=false, cache_obj->get_object_id()=854, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.511704] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.511987] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.512005] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.512012] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.512021] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.512032] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757512032, replica_locations:[]}) [2024-09-13 13:02:37.512079] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=731792, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.512775] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20294][T1_L0_G0][T1][YB42AC103326-00062119DAF2902F-0-0] [lt=20][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) 
[2024-09-13 13:02:37.514644] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.515114] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:37.516070] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.526188] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.527842] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.542599] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.543012] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.543045] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.543056] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.543069] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.543084] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757543083, replica_locations:[]})
[2024-09-13 13:02:37.543099] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.543125] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.543140] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.543169] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0,
ret="OB_SUCCESS")
[2024-09-13 13:02:37.543236] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=24][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565420350, cache_obj->added_lc()=false, cache_obj->get_object_id()=855, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.544176] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.544421] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.544447] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.544458] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.544470] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.544483] INFO [SHARE.LOCATION]
batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757544482, replica_locations:[]})
[2024-09-13 13:02:37.544552] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=730917, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:37.546603] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] [lt=23][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:37.561287] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.561591] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.561599] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.561640] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST",
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.561648] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.561656] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.561667] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757561666, replica_locations:[]})
[2024-09-13 13:02:37.561681] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.561704] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.561713] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.561733] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0]
[lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.561774] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565438892, cache_obj->added_lc()=false, cache_obj->get_object_id()=856, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.562006] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:37.562116] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=107][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757562000)
[2024-09-13 13:02:37.562129] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203757461854, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:37.562152] WDIAG [STORAGE.TRANS] generate_min_weak_read_version
(ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:37.562161] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:37.562166] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757562139)
[2024-09-13 13:02:37.562766] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.563031] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.563072] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.563088] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.563097] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018,
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.563108] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.563116] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757563115, replica_locations:[]})
[2024-09-13 13:02:37.563164] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=50000, remain_us=680707, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:37.578567] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.580046] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.593750] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.594063] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST",
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.594088] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.594100] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.594117] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.594132] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757594131, replica_locations:[]})
[2024-09-13 13:02:37.594147] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.594172] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.594183] WDIAG [SQL] do_close (ob_result_set.cpp:922)
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.594211] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.594256] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565471373, cache_obj->added_lc()=false, cache_obj->get_object_id()=857, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.595301] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.595583] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.595622] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.595638] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018,
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.595650] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.595665] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757595664, replica_locations:[]})
[2024-09-13 13:02:37.595721] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=679748, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:37.609741] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.611299] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.613403] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.613671] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST",
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.613713] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.613738] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=24] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.613799] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=57] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.613850] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=40] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757613848, replica_locations:[]})
[2024-09-13 13:02:37.613918] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=64] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.613975] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.613998] WDIAG [SQL] do_close
(ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.614058] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.614137] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=21][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565491250, cache_obj->added_lc()=false, cache_obj->get_object_id()=858, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.615867] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.616109] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.616150] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=38] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.616182] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0,
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757616180, replica_locations:[]})
[2024-09-13 13:02:37.616304] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1] will sleep(sleep_us=51000, remain_us=627568, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:37.628229] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=34] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807;
send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:37.631747] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.633282] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761)
[20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.645984] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.646454] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.646483] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.646501] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757646499, replica_locations:[]})
[2024-09-13 13:02:37.646521] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.646548] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.646557] WDIAG [SQL] do_close (ob_result_set.cpp:922)
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.646578] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.646642] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565523757, cache_obj->added_lc()=false, cache_obj->get_object_id()=859, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.647679] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.647975] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.647993] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.648002] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0,
ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757648001, replica_locations:[]}) [2024-09-13 13:02:37.648053] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=627416, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.659018] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.660822] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.661896] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADD-0-0] [lt=26][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757661454) [2024-09-13 13:02:37.661928] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADD-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203757661454}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, 
total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:37.661957] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.661969] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.661976] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757661940) [2024-09-13 13:02:37.667519] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.667841] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.667859] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.667869] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757667869, replica_locations:[]}) [2024-09-13 13:02:37.667916] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=44] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.667940] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.667949] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.667971] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.668013] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565545130, cache_obj->added_lc()=false, cache_obj->get_object_id()=860, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:37.668953] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.669159] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.669175] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.669184] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757669183, replica_locations:[]}) [2024-09-13 13:02:37.669232] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=52000, remain_us=574639, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.685919] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.687584] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.696532] 
INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1) [2024-09-13 13:02:37.699240] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.699512] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.699532] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.699543] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757699542, replica_locations:[]}) [2024-09-13 13:02:37.699558] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.699581] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) 
[2024-09-13 13:02:37.699592] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.699611] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.699655] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565576772, cache_obj->added_lc()=false, cache_obj->get_object_id()=861, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.700677] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.700918] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.700936] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.700945] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has 
changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757700945, replica_locations:[]}) [2024-09-13 13:02:37.700995] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=52000, remain_us=574474, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.709684] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=32][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.711467] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.715455] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:37.721425] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.721977] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.721998] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.722009] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757722009, replica_locations:[]}) [2024-09-13 13:02:37.722021] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.722042] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.722055] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.722089] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.722137] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565599255, cache_obj->added_lc()=false, cache_obj->get_object_id()=862, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 
0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.723111] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.723335] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.723348] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.723361] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757723360, replica_locations:[]}) [2024-09-13 13:02:37.723408] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=53000, remain_us=520463, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.728490] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=11] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 
13:02:37.728627] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=19] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:37.740205] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=24][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4018, dropped:18, tid:20197}]) [2024-09-13 13:02:37.741122] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.742610] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.753186] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.753541] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.753564] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.753570] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] leader 
doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.753578] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.753589] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757753588, replica_locations:[]}) [2024-09-13 13:02:37.753602] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.753626] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.753634] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.753659] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.753705] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565630821, cache_obj->added_lc()=false, cache_obj->get_object_id()=863, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.754672] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.755023] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.755042] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.755048] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.755058] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.755070] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] 
[lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757755069, replica_locations:[]}) [2024-09-13 13:02:37.755119] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=53000, remain_us=520350, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.761224] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.761999] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADE-0-0] [lt=26][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757761545) [2024-09-13 13:02:37.762006] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:37.762019] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADE-0-0] [lt=19][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, 
ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203757761545}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:37.762038] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757762000) [2024-09-13 13:02:37.762051] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203757562139, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:37.762074] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.762083] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) 
[2024-09-13 13:02:37.762089] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757762062) [2024-09-13 13:02:37.762100] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.762107] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.762111] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757762097) [2024-09-13 13:02:37.762768] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.776607] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.776937] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.776959] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.776968] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.776976] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.776999] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757776999, replica_locations:[]}) [2024-09-13 13:02:37.777013] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.777031] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.777040] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) 
[2024-09-13 13:02:37.777056] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.777095] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565654214, cache_obj->added_lc()=false, cache_obj->get_object_id()=864, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.778020] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.778235] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.778253] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.778260] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.778267] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.778287] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757778287, replica_locations:[]}) [2024-09-13 13:02:37.778330] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=54000, remain_us=465540, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.797159] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.798740] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.808283] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.808686] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.808706] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.808712] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.808719] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.808730] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757808730, replica_locations:[]}) [2024-09-13 13:02:37.808743] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.808764] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.808773] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:37.808790] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.808828] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565685946, cache_obj->added_lc()=false, cache_obj->get_object_id()=865, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.809663] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.810024] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.810043] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.810049] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.810058] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.810069] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757810069, replica_locations:[]}) [2024-09-13 13:02:37.810114] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1] will sleep(sleep_us=54000, remain_us=465356, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:37.813496] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.814991] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.820868] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=37][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:37.832519] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.832848] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.832868] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.832884] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.832894] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.832906] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757832905, replica_locations:[]}) [2024-09-13 13:02:37.832919] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.832939] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.832947] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.832980] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.833017] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565710135, cache_obj->added_lc()=false, cache_obj->get_object_id()=866, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.833810] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.834041] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.834058] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.834065] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.834072] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.834079] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757834079, replica_locations:[]}) [2024-09-13 13:02:37.834121] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=55000, remain_us=409750, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:37.840355] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=23][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:1409, tid:19945}]) [2024-09-13 13:02:37.846801] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:37.846829] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] 
[lt=26][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:37.846854] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:37.846871] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:37.846906] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=28] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:37.854259] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.855816] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.862184] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:37.862212] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=27][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, 
ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757862177) [2024-09-13 13:02:37.862222] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203757762060, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:37.862246] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.862257] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.862262] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757862229) [2024-09-13 13:02:37.864296] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.864756] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.864777] WDIAG [SHARE.PT] 
find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.864788] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.864800] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.864822] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757864822, replica_locations:[]}) [2024-09-13 13:02:37.864840] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.864860] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:54, local_retry_times:54, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 
13:02:37.864888] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=23][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.864902] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.864914] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:37.864923] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:37.864931] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:37.864953] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:37.864965] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.865013] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565742129, cache_obj->added_lc()=false, cache_obj->get_object_id()=867, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 
0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.865911] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:37.865951] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=39][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:37.866040] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.866379] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.866399] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.866409] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, 
replicas:[]}) [2024-09-13 13:02:37.866420] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.866432] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757866431, replica_locations:[]}) [2024-09-13 13:02:37.866466] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=32][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:37.866477] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:37.866487] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:37.866500] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] 
[lt=12][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:37.866509] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:37.866518] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:37.866546] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=27][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:37.866560] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:37.866568] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:37.866578] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:37.866586] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 
13:02:37.866594] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:37.866604] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:37.866614] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:37.866622] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:37.866639] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.866662] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=39][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:37.866670] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:37.866679] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:37.866687] WDIAG [SQL] generate_plan 
(ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:37.866721] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=26][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:37.866731] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:37.866740] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:37.866748] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:37.866761] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:37.866769] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=55, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:37.866788] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] will sleep(sleep_us=55000, remain_us=408682, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:37.868109] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.869965] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5A-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:37.869984] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5A-0-0] [lt=18][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203757869503], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:37.870526] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEA-0-0] [lt=7][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203757870093, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035848, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203757869233}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:37.870550] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEA-0-0] [lt=24][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:37.871152] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEA-0-0] [lt=5][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:37.873018] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:37.873308] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:37.873512] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=9] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:37.889300] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.889575] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.889595] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.889611] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.889621] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151)
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.889634] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757889633, replica_locations:[]})
[2024-09-13 13:02:37.889646] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.889665] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:55, local_retry_times:55, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:37.889688] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.889700] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.889711] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:37.889718] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:37.889722] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:37.889736] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:37.889750] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.889793] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565766911, cache_obj->added_lc()=false, cache_obj->get_object_id()=868, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.890640] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:37.890663] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:37.890746] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.891002] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.891019] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.891025] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.891033] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.891044] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1,
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757891043, replica_locations:[]})
[2024-09-13 13:02:37.891061] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:37.891070] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:37.891079] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:37.891094] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=14][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:37.891105] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:37.891114] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:37.891128] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:37.891138] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:37.891144] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:37.891152] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:37.891161] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:37.891165] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:37.891174] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:37.891183] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:37.891188] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:37.891195] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:37.891199] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:37.891207] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:37.891212] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:37.891222] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:37.891230] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:37.891235] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:37.891245] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:37.891253] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:37.891257] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=56, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:37.891274] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] will sleep(sleep_us=56000, remain_us=352596, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:37.896612] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0)
[2024-09-13 13:02:37.912493] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.913959] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.915771] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=23] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:37.920692] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.920963] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=25] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=5165, clean_start_pos=1258290, clean_num=125829)
[2024-09-13 13:02:37.921949] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.922245] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.922314] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.922329] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.922335] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.922342] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.922371] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757922370, replica_locations:[]})
[2024-09-13 13:02:37.922389] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.922407] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:55, local_retry_times:55, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:37.922423] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.922432] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.922448] WDIAG
[SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:37.922454] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:37.922457] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:37.922469] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:37.922479] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.922534] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565799635, cache_obj->added_lc()=false, cache_obj->get_object_id()=869, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.923470] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:37.923513] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=42][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:37.923593] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.923914] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.923930] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.923935] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.923944] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.923952] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757923951, replica_locations:[]})
[2024-09-13 13:02:37.923985] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=30][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:37.923997] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:37.924004] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:37.924016] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:37.924021] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:37.924029] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations
(ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:37.924040] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:37.924048] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:37.924053] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:37.924061] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:37.924065] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:37.924070] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:37.924076] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:37.924084] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:37.924088] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:37.924093] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:37.924096] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:37.924102] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:37.924106] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:37.924130] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:37.924143] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:37.924150] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:37.924155] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:37.924161] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:37.924165] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=56, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:37.924181] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] will sleep(sleep_us=56000, remain_us=351288, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:37.947448] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.947751] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.947771] WDIAG
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.947778] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:37.947787] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:37.947795] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757947795, replica_locations:[]})
[2024-09-13 13:02:37.947809] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:37.947830] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:56, local_retry_times:56, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:37.947845] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:37.947851] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:37.947859] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:37.947864] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:37.947867] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:37.947892] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:37.947904] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:37.947940] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565825058, cache_obj->added_lc()=false, cache_obj->get_object_id()=870, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:37.948777] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:37.948821] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=43][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:37.948919] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:37.949141] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.949156] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:37.949162] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1,
ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.949169] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.949179] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757949178, replica_locations:[]}) [2024-09-13 13:02:37.949197] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:37.949210] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:37.949222] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:37.949236] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:37.949253] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:37.949258] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:37.949270] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:37.949293] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:37.949298] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:37.949306] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:37.949310] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] 
[lt=3][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:37.949314] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:37.949320] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:37.949325] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:37.949330] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:37.949342] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:37.949346] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:37.949351] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:37.949355] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] Failed to optimizer 
stmt(ret=-4721) [2024-09-13 13:02:37.949365] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:37.949373] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:37.949382] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:37.949387] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:37.949392] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:37.949396] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=57, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:37.949411] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=10] will sleep(sleep_us=57000, remain_us=294459, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 
13:02:37.962168] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADF-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203757961704) [2024-09-13 13:02:37.962198] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6ADF-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203757961704}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:37.962225] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.962239] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:37.962251] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203757962213) [2024-09-13 13:02:37.971495] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.973057] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.975840] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.977305] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.980298] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.980684] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.980703] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.980709] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.980717] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.980727] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757980727, replica_locations:[]}) [2024-09-13 13:02:37.980738] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:37.980755] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:56, local_retry_times:56, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:37.980771] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:37.980779] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] fail close 
main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:37.980787] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:37.980793] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:37.980797] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:37.980815] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:37.980842] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=26][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:37.980895] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565858012, cache_obj->added_lc()=false, cache_obj->get_object_id()=871, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:37.981792] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:37.981813] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:37.981901] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:37.982251] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.982266] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:37.982274] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:37.982284] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:37.982295] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203757982294, replica_locations:[]}) [2024-09-13 13:02:37.982308] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:37.982318] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:37.982326] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:37.982337] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:37.982345] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) 
[2024-09-13 13:02:37.982352] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:37.982364] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:37.982626] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=260][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:37.982637] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:37.982643] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:37.982648] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:37.982652] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:37.982659] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:37.982668] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:37.982673] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:37.982680] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:37.982720] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=57000, remain_us=292750, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:38.006637] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.007138] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.007160] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] 
[lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.007167] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.007176] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.007188] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758007187, replica_locations:[]}) [2024-09-13 13:02:38.007205] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.007235] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.007244] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.007266] WDIAG [SQL] move_to_sqlstat_cache 
(ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.007329] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=23][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565884444, cache_obj->added_lc()=false, cache_obj->get_object_id()=872, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.008416] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.008640] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.008659] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.008666] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.008676] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.008688] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758008688, replica_locations:[]})
[2024-09-13 13:02:38.008740] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=235131, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:38.031649] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.032032] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.033405] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.033489] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.039921] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.040350] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.040370] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.040376] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.040384] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.040396] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758040395, replica_locations:[]})
[2024-09-13 13:02:38.040411] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.040434] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.040452] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.040486] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.040538] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565917654, cache_obj->added_lc()=false, cache_obj->get_object_id()=873, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.041636] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.041998] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.042019] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.042025] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.042034] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.042046] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758042045, replica_locations:[]})
[2024-09-13 13:02:38.042119] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=58000, remain_us=233351, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:38.052346] INFO [SQL.PC] update_memory_conf (ob_plan_cache.cpp:1921) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=4] update plan cache memory config(ob_plan_cache_percentage=5, ob_plan_cache_evict_high_percentage=90, ob_plan_cache_evict_low_percentage=50, tenant_id=1)
[2024-09-13 13:02:38.052370] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1462) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=22] start lib cache evict(tenant_id=1, mem_hold=2097152, mem_limit=161061270, cache_obj_num=1, cache_node_num=1)
[2024-09-13 13:02:38.052378] INFO [SQL.PC] cache_evict (ob_plan_cache.cpp:1479) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=7] end lib cache evict(tenant_id=1, cache_evict_num=0, mem_hold=2097152, mem_limit=161061270, cache_obj_num=1, cache_node_num=1)
[2024-09-13 13:02:38.052396] INFO [SQL.PC] runTimerTask (ob_plan_cache.cpp:2678) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=17] schedule next cache evict task(evict_interval=5000000)
[2024-09-13 13:02:38.054787] INFO [SQL.PC] dump_all_objs (ob_plan_cache.cpp:2397) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=5] Dumping All Cache Objs(alloc_obj_list.count()=3, alloc_obj_list=[{obj_id:206, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:2, added_to_lc:true, mem_used:157887}, {obj_id:874, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:1, added_to_lc:false, mem_used:23272}, {obj_id:875, tenant_id:1, log_del_time:9223372036854775807, real_del_time:9223372036854775807, ref_count:1, added_to_lc:false, mem_used:23272}])
[2024-09-13 13:02:38.054813] INFO [SQL.PC] runTimerTask (ob_plan_cache.cpp:2686) [20140][T1_PlanCacheEvi][T1][Y0-0000000000000000-0-0] [lt=25] schedule next cache evict task(evict_interval=5000000)
[2024-09-13 13:02:38.060257] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=19][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:38.062235] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE0-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758061783)
[2024-09-13 13:02:38.062249] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1)
[2024-09-13 13:02:38.062263] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:38.062255] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE0-0-0] [lt=18][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203758061783}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:38.062280] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758062242)
[2024-09-13 13:02:38.062288] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203757862229, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:38.062310] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:38.062318] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:38.062323] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758062297)
[2024-09-13 13:02:38.062336] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:38.062340] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:38.062343] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758062333)
[2024-09-13 13:02:38.066945] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.067273] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.067294] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.067301] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.067312] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.067325] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758067325, replica_locations:[]})
[2024-09-13 13:02:38.067339] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.067360] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.067369] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.067403] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.067472] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565944562, cache_obj->added_lc()=false, cache_obj->get_object_id()=874, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.068354] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.068567] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.068590] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.068596] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.068606] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.068614] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758068614, replica_locations:[]})
[2024-09-13 13:02:38.068659] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=175212, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203758243870)
[2024-09-13 13:02:38.074016] INFO [PALF] runTimerTask (block_gc_timer_task.cpp:101) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=34] BlockGCTimerTask success(ret=0, cost_time_us=9, palf_env_impl_={IPalfEnvImpl:{IPalfEnvImpl:"Dummy"}, self:"172.16.51.35:2882", log_dir:"/data1/oceanbase/data/clog/tenant_1", disk_options_wrapper:{disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1}, log_alloc_mgr_:{flying_log_task:0, flying_meta_task:0}})
[2024-09-13 13:02:38.089050] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.090565] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.093042] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=61][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.093538] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=9] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.094029] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=17] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.094460] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=15] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.094506] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.094654] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=10] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.094660] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=20] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.094678] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=8] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.094894] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=16] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.095633] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.095744] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=7] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.099269] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] Cache replace map node details(ret=0, replace_node_count=0, replace_time=2569, replace_start_pos=629140, replace_num=62914)
[2024-09-13 13:02:38.099293] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=23] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10)
[2024-09-13 13:02:38.100293] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.100686] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.100705] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.100712] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.100721] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.100735] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758100734, replica_locations:[]})
[2024-09-13 13:02:38.100757] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.100782] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.100794] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.100821] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.100895] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=28][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6565978013, cache_obj->added_lc()=false, cache_obj->get_object_id()=875, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.102009] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.102325] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.102342] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.102348] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.102355] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.102367] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758102366, replica_locations:[]})
[2024-09-13 13:02:38.102417] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=59000, remain_us=173053, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203758275469)
[2024-09-13 13:02:38.119821] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=17] swc wakeup.(stat_period_=1000000, ready=false)
[2024-09-13 13:02:38.121297] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=31] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:38.125134] INFO [SQL.QRR] runTimerTask (ob_udr_mgr.cpp:92) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8] run rewrite rule refresh task(rule_mgr_->tenant_id_=1)
[2024-09-13 13:02:38.125168] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=18][errcode=0] server is initiating(server_id=0, local_seq=59, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:38.126278] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=12] table not exist(tenant_id=1, database_id=201001, table_name=__all_sys_stat, table_name.ptr()="data_size:14, data:5F5F616C6C5F7379735F73746174", ret=-5019)
[2024-09-13 13:02:38.126301] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=21][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_sys_stat, ret=-5019)
[2024-09-13 13:02:38.126310] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=8][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_sys_stat, db_name=oceanbase)
[2024-09-13 13:02:38.126319] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_sys_stat)
[2024-09-13 13:02:38.126329] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=8][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:38.126336] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=6][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:38.126342] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=3][errcode=-5019] Table 'oceanbase.__all_sys_stat' doesn't exist
[2024-09-13 13:02:38.126347] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=4][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:38.126352] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=5][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:38.126357] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=5][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:38.126364] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=7][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:38.126370] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=6][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:38.126377] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=6][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:38.126381] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=4][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:38.126391] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=5][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:38.126396] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=5][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:38.126405] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:38.126412] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:38.126417] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE, ret=-5019)
[2024-09-13 13:02:38.126425] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=6][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:38.126431] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:38.126453] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=19][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:38.126467] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:38.126474] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=7][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:38.126478] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=4][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:38.126491] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:38.126500] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.126507] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20202][T1_ReqMemEvict][T1][YB42AC103323-000621F921E60C80-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:38.126516] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE)
[2024-09-13 13:02:38.126520] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:38.126525] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:38.126532] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203758126044, sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE)
[2024-09-13 13:02:38.126542] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:38.126547] WDIAG [SHARE] fetch_max_id (ob_max_id_fetcher.cpp:482) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute sql failed(sql=SELECT VALUE FROM __all_sys_stat WHERE ZONE = '' AND NAME = 'ob_max_used_rewrite_rule_version' AND TENANT_ID = 0 FOR UPDATE, ret=-5019)
[2024-09-13 13:02:38.126603] WDIAG [SQL.QRR] fetch_max_rule_version (ob_udr_sql_service.cpp:141) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] failed to fetch max rule version(ret=-5019, tenant_id=1)
[2024-09-13 13:02:38.126612] WDIAG [SQL.QRR] sync_rule_from_inner_table
(ob_udr_mgr.cpp:251) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] failed to fetch max rule version(ret=-5019) [2024-09-13 13:02:38.126617] WDIAG [SQL.QRR] runTimerTask (ob_udr_mgr.cpp:94) [20202][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] failed to sync rule from inner table(ret=-5019) [2024-09-13 13:02:38.127174] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=19][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:38.127612] INFO [PALF] log_loop_ (log_loop_thread.cpp:155) [20122][T1_LogLoop][T1][Y0-0000000000000000-0-0] [lt=15] LogLoopThread round_cost_time(us)(round_cost_time=2) [2024-09-13 13:02:38.127858] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.128096] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.128114] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.128124] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.128134] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.128147] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758128146, replica_locations:[]}) [2024-09-13 13:02:38.128167] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.128188] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.128197] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.128215] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.128253] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566005372, cache_obj->added_lc()=false, cache_obj->get_object_id()=876, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 
0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.129144] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.129357] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.129376] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.129383] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.129393] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.129403] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758129403, 
replica_locations:[]}) [2024-09-13 13:02:38.129467] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=60000, remain_us=114403, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:38.140335] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC87-0-0] [lt=21][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:38.147203] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.148745] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.148810] INFO [CLOG] runTimerTask (ob_log_replay_service.cpp:159) [20223][T1_ReplayProces][T1][Y0-0000000000000000-0-0] [lt=8] dump tenant replay process(tenant_id=1, unreplayed_log_size(MB)=0, estimate_time(second)=0, replayed_log_size(MB)=0, last_replayed_log_size(MB)=0, round_cost_time(second)=10, pending_replay_log_size(MB)=0) [2024-09-13 13:02:38.155014] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.156484] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F921A782E0-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.161636] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:38.161906] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.161928] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.161935] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.161945] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.161957] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758161956, replica_locations:[]}) [2024-09-13 13:02:38.161971] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.161995] WDIAG [SQL] 
do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.162009] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.162031] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.162071] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566039188, cache_obj->added_lc()=false, cache_obj->get_object_id()=877, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.162417] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:38.162465] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=46][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, 
generate_timestamp=1726203758162409) [2024-09-13 13:02:38.162481] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203758062295, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:38.162510] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.162522] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.162530] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758162496) [2024-09-13 13:02:38.163073] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.163146] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE1-0-0] [lt=28][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758161873) [2024-09-13 
13:02:38.163193] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE1-0-0] [lt=21][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203758161873}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:38.163213] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.163241] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.163252] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758163204) [2024-09-13 13:02:38.163306] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.163324] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.163330] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.163340] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.163352] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758163351, replica_locations:[]}) [2024-09-13 13:02:38.163399] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=60000, remain_us=112070, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:38.189664] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.189924] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.189943] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.189949] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.189958] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.189967] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758189967, replica_locations:[]}) [2024-09-13 13:02:38.189980] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.190002] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.190011] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.190043] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.190083] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566067201, cache_obj->added_lc()=false, cache_obj->get_object_id()=878, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.190992] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921B60C84-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.191193] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.191212] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.191219] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:140) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.191229] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.191238] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758191237, replica_locations:[]}) [2024-09-13 13:02:38.191282] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0] will sleep(sleep_us=52588, remain_us=52588, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203758243870) [2024-09-13 13:02:38.193630] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D9A3A431-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:38.194362] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=31] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, 
disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1}) [2024-09-13 13:02:38.204614] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=19] PNIO [ratelimit] time: 1726203758204612, bytes: 4834017, bw: 0.116054 MB/s, add_ts: 1006695, add_bytes: 122506 [2024-09-13 13:02:38.206333] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.207905] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.213254] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:565) [20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=32] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, 
free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0) [2024-09-13 13:02:38.216409] INFO [CLOG.EXTLOG] resize_log_ext_handler_ (ob_cdc_service.cpp:649) [20225][T1_CdcSrv][T1][Y0-0000000000000000-0-0] [lt=25] finish to resize log external storage handler(current_ts=1726203758216405, tenant_max_cpu=2, valid_ls_v1_count=0, valid_ls_v2_count=0, other_ls_count=0, new_concurrency=0) [2024-09-13 13:02:38.217086] INFO [CLOG] run1 (ob_garbage_collector.cpp:1358) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=11] Garbage Collector is running(seq_=3, gc_interval=10000000) [2024-09-13 13:02:38.217122] INFO [CLOG] gc_check_member_list_ (ob_garbage_collector.cpp:1451) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=15] gc_check_member_list_ cost time(ret=0, time_us=21) [2024-09-13 13:02:38.217135] INFO [CLOG] execute_gc_ (ob_garbage_collector.cpp:1723) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=10] execute_gc cost time(ret=0, time_us=0) [2024-09-13 13:02:38.217142] INFO [CLOG] execute_gc_ (ob_garbage_collector.cpp:1723) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=4] execute_gc cost time(ret=0, time_us=0) [2024-09-13 13:02:38.217146] INFO [SERVER] handle (ob_safe_destroy_handler.cpp:240) [20229][T1_GCCollector][T1][Y0-0000000000000000-0-0] [lt=3] ObSafeDestroyHandler start process [2024-09-13 13:02:38.219516] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=32][errcode=0] server is initiating(server_id=0, local_seq=60, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:38.220221] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782EC-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.220473] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] 
[lt=15] table not exist(tenant_id=1, database_id=201001, table_name=__all_server, table_name.ptr()="data_size:12, data:5F5F616C6C5F736572766572", ret=-5019) [2024-09-13 13:02:38.220493] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=18][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_server, ret=-5019) [2024-09-13 13:02:38.220511] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=17][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_server, db_name=oceanbase) [2024-09-13 13:02:38.220521] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_server) [2024-09-13 13:02:38.220529] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=7][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:38.220537] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:38.220542] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=3][errcode=-5019] Table 'oceanbase.__all_server' doesn't exist [2024-09-13 13:02:38.220550] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:38.220557] WDIAG [SQL.RESV] resolve_basic_table 
(ob_dml_resolver.cpp:13279) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=6][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:38.220562] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=4][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:38.220569] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=6][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:38.220578] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=8][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:38.220582] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:38.220589] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=6][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:38.220601] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=7][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:38.220609] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=7][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:38.220618] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:38.220625] 
WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:38.220633] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=7][errcode=-5019] fail to handle text query(stmt=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882, ret=-5019) [2024-09-13 13:02:38.220639] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=5][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:38.220647] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=8][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:38.220660] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=10][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:38.220674] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:38.220681] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=6][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:38.220684] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 
13:02:38.220700] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=7][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:38.220712] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.220720] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20111][T1_Occam][T1][YB42AC103323-000621F921F60C80-0-0] [lt=7][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882"}, aret=-5019, ret=-5019) [2024-09-13 13:02:38.220726] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882) [2024-09-13 13:02:38.220734] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:38.220738] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:38.220743] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203758220337, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882) [2024-09-13 13:02:38.220752] WDIAG [COMMON.MYSQLP] read 
(ob_mysql_proxy.cpp:66) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:38.220757] WDIAG get_my_sql_result_ (ob_table_access_helper.h:435) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-5019] GCTX.sql_proxy_ read failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, tenant_id=1, columns=0x2b07c6c55878, table=__all_server, condition=where svr_ip='172.16.51.35' and svr_port=2882, sql=SELECT zone FROM __all_server where svr_ip='172.16.51.35' and svr_port=2882, columns_str="zone") [2024-09-13 13:02:38.220769] WDIAG read_and_convert_to_values_ (ob_table_access_helper.h:332) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-5019] fail to get ObMySQLResult(ret=-5019, ret="OB_TABLE_NOT_EXIST", MTL_ID()=1, table=__all_server, condition=where svr_ip='172.16.51.35' and svr_port=2882) [2024-09-13 13:02:38.220824] WDIAG [COORDINATOR] get_self_zone_name (table_accessor.cpp:634) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-5019] get zone from __all_server failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", columns=0x2b07c6c55878, where_condition="where svr_ip='172.16.51.35' and svr_port=2882", zone_name_holder=) [2024-09-13 13:02:38.220847] WDIAG [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:567) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=22][errcode=-5019] get self zone name failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", all_ls_election_reference_info=[]) [2024-09-13 13:02:38.220852] WDIAG [COORDINATOR] get_all_ls_election_reference_info (table_accessor.cpp:576) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] zone name is empty(ret=-5019, ret="OB_TABLE_NOT_EXIST", all_ls_election_reference_info=[]) [2024-09-13 13:02:38.220857] WDIAG [COORDINATOR] refresh (ob_leader_coordinator.cpp:144) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-5019] get all ls election reference info failed(ret=-5019, 
ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:38.220868] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=5] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:38.223578] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.223832] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.223852] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.223859] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.223870] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.223888] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, 
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758223888, replica_locations:[]}) [2024-09-13 13:02:38.223903] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.223931] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.223943] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.223978] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.224021] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566101137, cache_obj->added_lc()=false, cache_obj->get_object_id()=879, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.224256] INFO [STORAGE.TRANS] run1 (ob_xa_trans_heartbeat_worker.cpp:84) [20243][T1_ObXAHbWorker][T1][Y0-0000000000000000-0-0] [lt=24] XA scheduler heartbeat task statistics(avg_time=0) [2024-09-13 13:02:38.225138] WDIAG [SERVER] 
fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.225352] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.225370] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.225384] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.225398] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.225414] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758225413, replica_locations:[]}) [2024-09-13 13:02:38.225477] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0] will sleep(sleep_us=49992, remain_us=49992, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203758275469) [2024-09-13 13:02:38.227162] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:305) [20249][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=6] ====== traversal_flush timer task ====== [2024-09-13 13:02:38.227189] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:338) [20249][T1_Flush][T1][Y0-0000000000000000-0-0] [lt=21] no logstream(ret=0, ls_cnt=0) [2024-09-13 13:02:38.227309] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:130) [20248][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=10] ====== checkpoint timer task ====== [2024-09-13 13:02:38.227327] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:193) [20248][T1_TxCkpt][T1][Y0-0000000000000000-0-0] [lt=13] no logstream(ret=0, ls_cnt=0) [2024-09-13 13:02:38.228270] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:116) [20251][T1_TabletGC][T1][Y0-0000000000000000-0-0] [lt=9] ====== [tabletchange] timer task ======(GC_CHECK_INTERVAL=5000000) [2024-09-13 13:02:38.228288] INFO [STORAGE] runTimerTask (ob_tablet_gc_service.cpp:242) [20251][T1_TabletGC][T1][Y0-0000000000000000-0-0] [lt=14] [tabletchange] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, times=4) [2024-09-13 13:02:38.228538] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=14] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:38.228699] INFO [CLOG] do_thread_task_ (ob_remote_fetch_log_worker.cpp:250) [20226][T1_RFLWorker][T1][YB42AC103323-000621F920860C7D-0-0] [lt=17] ObRemoteFetchWorker is running(thread_index=0) [2024-09-13 13:02:38.228707] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=17] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, 
mem_used=16637952) [2024-09-13 13:02:38.229707] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=19] gc stale ls task succ [2024-09-13 13:02:38.230079] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:346) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=6] ====== check clog disk timer task ====== [2024-09-13 13:02:38.230097] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=16] get_disk_usage(ret=0, capacity(MB):=0, used(MB):=0) [2024-09-13 13:02:38.230107] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=6] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false) [2024-09-13 13:02:38.230292] INFO [STORAGE] runTimerTask (ob_empty_shell_task.cpp:39) [20252][T1_TabletShell][T1][Y0-0000000000000000-0-0] [lt=6] ====== [emptytablet] empty shell timer task ======(GC_EMPTY_TABLET_SHELL_INTERVAL=5000000) [2024-09-13 13:02:38.230310] INFO [STORAGE] runTimerTask (ob_empty_shell_task.cpp:107) [20252][T1_TabletShell][T1][Y0-0000000000000000-0-0] [lt=12] [emptytablet] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, times=4) [2024-09-13 13:02:38.231861] INFO [SQL.DTL] runTimerTask (ob_dtl_interm_result_manager.cpp:611) [20206][T1_TntSharedTim][T1][Y0-0000000000000000-0-0] [lt=44] clear dtl interm result cost(us)(clear_cost=2344, ret=0, gc_.expire_keys_.count()=0, dump count=0, clean count=0) [2024-09-13 13:02:38.234971] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=28] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:38.239528] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:38.239547] 
WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:38.239553] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=5][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:38.239565] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:38.243290] INFO [STORAGE.TRANS] dump_mapper_info (ob_lock_wait_mgr.h:66) [20231][T1_LockWaitMgr][T1][Y0-0000000000000000-0-0] [lt=18] report RowHolderMapper summary info(count=0, bkt_cnt=248) [2024-09-13 13:02:38.243771] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:104) [20264][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=5] tx gc loop thread is running(MTL_ID()=1) [2024-09-13 13:02:38.243783] INFO [STORAGE.TRANS] run1 (ob_tx_loop_worker.cpp:111) [20264][T1_TxLoopWorker][T1][Y0-0000000000000000-0-0] [lt=12] try gc retain ctx [2024-09-13 13:02:38.243970] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203758243871, ctx_timeout_ts=1726203758243871, worker_timeout_ts=1726203758243870, default_timeout=1000000) [2024-09-13 13:02:38.243990] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:38.243996] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] 
[lt=6][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:38.244008] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.244021] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=11][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) [2024-09-13 13:02:38.244036] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.244056] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=19][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.244075] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.244114] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566121232, cache_obj->added_lc()=false, cache_obj->get_object_id()=880, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x24edd926 0x24eddaf0 0x251869b8 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 
0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.244802] INFO [ARCHIVE] do_thread_task_ (ob_archive_fetcher.cpp:312) [20255][T1_ArcFetcher][T1][YB42AC103323-000621F920E60C7D-0-0] [lt=18] ObArchiveFetcher is running(thread_index=0) [2024-09-13 13:02:38.245060] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203758243870, ctx_timeout_ts=1726203758243870, worker_timeout_ts=1726203758243870, default_timeout=1000000) [2024-09-13 13:02:38.245083] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=22][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:38.245092] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:38.245108] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=15][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:38.245120] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=12][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:38.245138] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=18][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:38.245174] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=1][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:38.245203] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=27][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.245213] WDIAG [SQL] do_close (ob_result_set.cpp:922) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.245236] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:38.245248] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=0][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:38.245260] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=7][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:38.245268] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.245273] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000611) [2024-09-13 13:02:38.245277] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20197][T1_FreInfoReloa][T1][YB42AC103323-000621F921B60C84-0-0] [lt=4][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:38.245283] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:38.245291] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:38.245306] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:38.245311] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] query failed(ret=-4012, conn=0x2b07a13e06e0, start=1726203756244654, sql=SELECT row_id, column_name, 
column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:38.245320] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] read failed(ret=-4012) [2024-09-13 13:02:38.245325] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:38.245352] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566122473, cache_obj->added_lc()=false, cache_obj->get_object_id()=882, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a418dc 0x24ecc792 0x24db7a31 0x1428ea01 0x1428a323 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.245403] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:38.245413] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:38.245418] WDIAG [SHARE] get_snapshot_gc_scn (ob_global_stat_proxy.cpp:164) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:38.245424] WDIAG [STORAGE] get_global_info (ob_tenant_freeze_info_mgr.cpp:811) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] fail to get global info(ret=-4012, tenant_id=1) [2024-09-13 13:02:38.245429] WDIAG 
[STORAGE] try_update_info (ob_tenant_freeze_info_mgr.cpp:954) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4012] failed to get global info(ret=-4012) [2024-09-13 13:02:38.245433] WDIAG [STORAGE] runTimerTask (ob_tenant_freeze_info_mgr.cpp:1008) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] fail to try update info(tmp_ret=-4012, tmp_ret="OB_TIMEOUT") [2024-09-13 13:02:38.245468] INFO [STORAGE] try_update_reserved_snapshot (ob_tenant_freeze_info_mgr.cpp:1044) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=29] success to update min reserved snapshot(reserved_snapshot=0, duration=1800, snapshot_gc_ts_=0) [2024-09-13 13:02:38.245480] INFO [STORAGE] try_update_reserved_snapshot (ob_tenant_freeze_info_mgr.cpp:1071) [20197][T1_FreInfoReloa][T1][Y0-0000000000000000-0-0] [lt=11] update reserved snapshot finished(cost_ts=14, reserved_snapshot=0) [2024-09-13 13:02:38.250362] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.250753] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.251380] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.251669] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.251915] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") 
[2024-09-13 13:02:38.258922] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1966) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=9] dump_dag_status(dag_cnt=0, map_size=0) [2024-09-13 13:02:38.258935] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1976) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=12] dump_dag_status(running_dag_net_map_size=0, blocking_dag_net_list_size=0) [2024-09-13 13:02:38.258941] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(priority="PRIO_COMPACTION_HIGH", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:38.258949] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=7] dump_dag_status(priority="PRIO_HA_HIGH", low_limit=8, up_limit=8, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:38.258953] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(priority="PRIO_COMPACTION_MID", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:38.258957] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(priority="PRIO_HA_MID", low_limit=5, up_limit=5, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:38.258961] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(priority="PRIO_COMPACTION_LOW", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:38.258966] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(priority="PRIO_HA_LOW", low_limit=2, up_limit=2, 
running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:38.258970] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status(priority="PRIO_DDL", low_limit=2, up_limit=2, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:38.258974] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status(priority="PRIO_DDL_HIGH", low_limit=6, up_limit=6, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:38.258977] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1985) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status(priority="PRIO_TTL", low_limit=2, up_limit=2, running_task=0, ready_dag_count=0, waiting_dag_count=0) [2024-09-13 13:02:38.258982] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(type={init_dag_prio:0, sys_task_type:3, dag_type_str:"MINI_MERGE", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:38.258988] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=6] dump_dag_status(type={init_dag_prio:0, sys_task_type:3, dag_type_str:"MINI_MERGE", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:38.258993] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(type={init_dag_prio:2, sys_task_type:5, dag_type_str:"MINOR_EXECUTE", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:38.258997] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:2, sys_task_type:5, dag_type_str:"MINOR_EXECUTE", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 
13:02:38.259001] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:4, sys_task_type:6, dag_type_str:"MAJOR_MERGE", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:38.259005] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:4, sys_task_type:6, dag_type_str:"MAJOR_MERGE", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:38.259009] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:0, sys_task_type:4, dag_type_str:"TX_TABLE_MERGE", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:38.259014] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:0, sys_task_type:4, dag_type_str:"TX_TABLE_MERGE", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:38.259018] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:4, sys_task_type:7, dag_type_str:"WRITE_CKPT", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:38.259022] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:4, sys_task_type:7, dag_type_str:"WRITE_CKPT", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:38.259026] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:0, sys_task_type:19, dag_type_str:"MDS_TABLE_MERGE", dag_module_str:"COMPACTION"}, dag_count=0) [2024-09-13 13:02:38.259030] INFO [COMMON] 
dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:0, sys_task_type:19, dag_type_str:"MDS_TABLE_MERGE", dag_module_str:"COMPACTION"}, scheduled_task_count=0) [2024-09-13 13:02:38.259034] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"DDL", dag_module_str:"DDL"}, dag_count=0) [2024-09-13 13:02:38.259038] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"DDL", dag_module_str:"DDL"}, scheduled_task_count=0) [2024-09-13 13:02:38.259042] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"UNIQUE_CHECK", dag_module_str:"DDL"}, dag_count=0) [2024-09-13 13:02:38.259046] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"UNIQUE_CHECK", dag_module_str:"DDL"}, scheduled_task_count=0) [2024-09-13 13:02:38.259050] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"SQL_BUILD_INDEX", dag_module_str:"DDL"}, dag_count=0) [2024-09-13 13:02:38.259055] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(type={init_dag_prio:6, sys_task_type:2, dag_type_str:"SQL_BUILD_INDEX", dag_module_str:"DDL"}, scheduled_task_count=0) [2024-09-13 13:02:38.259059] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) 
[20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:7, sys_task_type:12, dag_type_str:"DDL_KV_MERGE", dag_module_str:"DDL"}, dag_count=0) [2024-09-13 13:02:38.259063] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:7, sys_task_type:12, dag_type_str:"DDL_KV_MERGE", dag_module_str:"DDL"}, scheduled_task_count=0) [2024-09-13 13:02:38.259067] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259071] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259075] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259079] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259083] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"FINISH_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259088] INFO [COMMON] dump_dag_status 
(ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"FINISH_COMPLETE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259092] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259096] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259100] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259104] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259108] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"SYS_TABLETS_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259112] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"SYS_TABLETS_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259116] INFO [COMMON] 
dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"TABLET_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259120] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"TABLET_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259124] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"DATA_TABLETS_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259128] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"DATA_TABLETS_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259132] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"TABLET_GROUP_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259136] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"TABLET_GROUP_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259140] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"MIGRATION_FINISH", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259145] INFO [COMMON] 
dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"MIGRATION_FINISH", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259149] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259153] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"INITIAL_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259157] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259161] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"START_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259165] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"FINISH_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259169] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:1, dag_type_str:"FINISH_PREPARE_MIGRATION", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 
13:02:38.259173] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:3, sys_task_type:1, dag_type_str:"FAST_MIGRATE", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259177] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:3, sys_task_type:1, dag_type_str:"FAST_MIGRATE", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259181] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:1, dag_type_str:"VALIDATE", dag_module_str:"MIGRATE"}, dag_count=0) [2024-09-13 13:02:38.259185] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:1, dag_type_str:"VALIDATE", dag_module_str:"MIGRATE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259189] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:16, dag_type_str:"TABLET_BACKFILL_TX", dag_module_str:"BACKFILL_TX"}, dag_count=0) [2024-09-13 13:02:38.259193] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:16, dag_type_str:"TABLET_BACKFILL_TX", dag_module_str:"BACKFILL_TX"}, scheduled_task_count=0) [2024-09-13 13:02:38.259197] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:16, dag_type_str:"FINISH_BACKFILL_TX", dag_module_str:"BACKFILL_TX"}, dag_count=0) [2024-09-13 13:02:38.259201] INFO [COMMON] 
dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:16, dag_type_str:"FINISH_BACKFILL_TX", dag_module_str:"BACKFILL_TX"}, scheduled_task_count=0) [2024-09-13 13:02:38.259205] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_META", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:38.259210] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_META", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:38.259214] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_PREPARE", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:38.259218] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_PREPARE", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:38.259222] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_FINISH", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:38.259226] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_FINISH", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:38.259230] INFO [COMMON] dump_dag_status 
(ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_DATA", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:38.259235] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_DATA", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:38.259239] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"PREFETCH_BACKUP_INFO", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:38.259243] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"PREFETCH_BACKUP_INFO", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:38.259247] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_INDEX_REBUILD", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:38.259251] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_INDEX_REBUILD", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:38.259255] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_COMPLEMENT_LOG", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:38.259259] INFO [COMMON] dump_dag_status 
(ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:8, dag_type_str:"BACKUP_COMPLEMENT_LOG", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:38.259263] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:10, dag_type_str:"BACKUP_BACKUPSET", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:38.259268] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:10, dag_type_str:"BACKUP_BACKUPSET", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:38.259272] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status(type={init_dag_prio:5, sys_task_type:11, dag_type_str:"BACKUP_ARCHIVELOG", dag_module_str:"BACKUP"}, dag_count=0) [2024-09-13 13:02:38.259276] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:11, dag_type_str:"BACKUP_ARCHIVELOG", dag_module_str:"BACKUP"}, scheduled_task_count=0) [2024-09-13 13:02:38.259280] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"INITIAL_LS_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:38.259284] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"INITIAL_LS_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259288] INFO [COMMON] dump_dag_status 
(ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"START_LS_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:38.259292] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"START_LS_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259296] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"SYS_TABLETS_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:38.259300] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"SYS_TABLETS_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259304] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"DATA_TABLETS_META_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:38.259308] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"DATA_TABLETS_META_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259312] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"TABLET_GROUP_META_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:38.259317] INFO [COMMON] 
dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"TABLET_GROUP_META_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259321] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"FINISH_LS_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:38.259325] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"FINISH_LS_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259329] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"INITIAL_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:38.259333] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"INITIAL_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259337] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"START_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:38.259341] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"START_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, 
scheduled_task_count=0) [2024-09-13 13:02:38.259345] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=5] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"FINISH_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:38.259349] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"FINISH_TABLET_GROUP_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259353] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"TABLET_RESTORE", dag_module_str:"RESTORE"}, dag_count=0) [2024-09-13 13:02:38.259357] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:14, dag_type_str:"TABLET_RESTORE", dag_module_str:"RESTORE"}, scheduled_task_count=0) [2024-09-13 13:02:38.259361] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:15, dag_type_str:"BACKUP_CLEAN", dag_module_str:"BACKUP_CLEAN"}, dag_count=0) [2024-09-13 13:02:38.259365] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:5, sys_task_type:15, dag_type_str:"BACKUP_CLEAN", dag_module_str:"BACKUP_CLEAN"}, scheduled_task_count=0) [2024-09-13 13:02:38.259369] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:3, sys_task_type:17, dag_type_str:"REMOVE_MEMBER", 
dag_module_str:"REMOVE_MEMBER"}, dag_count=0) [2024-09-13 13:02:38.259373] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:3, sys_task_type:17, dag_type_str:"REMOVE_MEMBER", dag_module_str:"REMOVE_MEMBER"}, scheduled_task_count=0) [2024-09-13 13:02:38.259377] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:18, dag_type_str:"TRANSFER_BACKFILL_TX", dag_module_str:"TRANSFER"}, dag_count=0) [2024-09-13 13:02:38.259381] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:18, dag_type_str:"TRANSFER_BACKFILL_TX", dag_module_str:"TRANSFER"}, scheduled_task_count=0) [2024-09-13 13:02:38.259385] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:18, dag_type_str:"TRANSFER_REPLACE_TABLE", dag_module_str:"TRANSFER"}, dag_count=0) [2024-09-13 13:02:38.259389] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:1, sys_task_type:18, dag_type_str:"TRANSFER_REPLACE_TABLE", dag_module_str:"TRANSFER"}, scheduled_task_count=0) [2024-09-13 13:02:38.259393] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1988) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:8, sys_task_type:20, dag_type_str:"TTL_DELTE_DAG", dag_module_str:"TTL"}, dag_count=0) [2024-09-13 13:02:38.259397] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1989) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status(type={init_dag_prio:8, sys_task_type:20, 
dag_type_str:"TTL_DELTE_DAG", dag_module_str:"TTL"}, scheduled_task_count=0) [2024-09-13 13:02:38.259401] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status[DAG_NET](type="DAG_NET_MIGRATION", dag_net_count=0) [2024-09-13 13:02:38.259405] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status[DAG_NET](type="DAG_NET_PREPARE_MIGRATION", dag_net_count=0) [2024-09-13 13:02:38.259409] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status[DAG_NET](type="DAG_NET_COMPLETE_MIGRATION", dag_net_count=0) [2024-09-13 13:02:38.259412] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status[DAG_NET](type="DAG_NET_TRANSFER", dag_net_count=0) [2024-09-13 13:02:38.259415] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status[DAG_NET](type="DAG_NET_BACKUP", dag_net_count=0) [2024-09-13 13:02:38.259418] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status[DAG_NET](type="DAG_NET_RESTORE", dag_net_count=0) [2024-09-13 13:02:38.259422] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=4] dump_dag_status[DAG_NET](type="DAG_NET_TYPE_BACKUP_CLEAN", dag_net_count=0) [2024-09-13 13:02:38.259425] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1993) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] dump_dag_status[DAG_NET](type="DAG_NET_TRANSFER_BACKFILL_TX", dag_net_count=0) [2024-09-13 13:02:38.259429] INFO [COMMON] dump_dag_status (ob_dag_scheduler.cpp:1996) [20196][T1_DagScheduler][T1][Y0-0000000000000000-0-0] [lt=3] 
dump_dag_status(total_worker_cnt=43, total_running_task_cnt=0, work_thread_num=43, scheduled_task_cnt=0) [2024-09-13 13:02:38.262414] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE2-0-0] [lt=26][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758261950) [2024-09-13 13:02:38.262461] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE2-0-0] [lt=38][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203758261950}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:38.262484] WDIAG [PALF] convert_to_ts (scn.cpp:265) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4016] invalid scn should not convert to ts (val_=18446744073709551615) [2024-09-13 13:02:38.262495] INFO [STORAGE.TRANS] print_stat_ (ob_tenant_weak_read_service.cpp:541) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10] [WRS] [TENANT_WEAK_READ_SERVICE] [STAT](tenant_id=1, server_version={version:{val:18446744073709551615, v:3}, total_part_count:0, 
valid_inner_part_count:0, valid_user_part_count:0}, server_version_delta=1726203758262482, in_cluster_service=false, cluster_version={val:18446744073709551615, v:3}, min_cluster_version={val:18446744073709551615, v:3}, max_cluster_version={val:18446744073709551615, v:3}, get_cluster_version_err=0, cluster_version_delta=-1, cluster_service_master="0.0.0.0:0", cluster_service_tablet_id={id:226}, post_cluster_heartbeat_count=0, succ_cluster_heartbeat_count=0, cluster_heartbeat_interval=1000000, local_cluster_version={val:0, v:0}, local_cluster_delta=1726203758262482, force_self_check=true, weak_read_refresh_interval=100000) [2024-09-13 13:02:38.262539] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=26][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.262548] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.262553] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758262524) [2024-09-13 13:02:38.266477] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.268051] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=21][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:38.268066] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.269892] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C90-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.270168] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.270185] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.270191] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.270198] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.270222] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=5][errcode=0] server is initiating(server_id=0, local_seq=61, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:38.271118] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=8] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, 
data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:38.271140] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=20][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:38.271148] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:38.271154] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:38.271161] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:38.271169] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:38.271174] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=3][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:38.271181] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:38.271186] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:38.271193] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:38.271197] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=3][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:38.271204] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:38.271208] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=3][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:38.271214] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:38.271224] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:38.271229] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=5][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:38.271238] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=7][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:38.271245] 
WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:38.271249] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:38.271257] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=7][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:38.271264] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=7][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:38.271277] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=10][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:38.271290] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:38.271297] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=7][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:38.271302] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=4][errcode=-5019] 
failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:38.271325] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:38.271335] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.271340] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:38.271347] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=6][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:38.271355] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=7][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:38.271359] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:38.271364] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=4][errcode=-5019] query failed(ret=-5019, 
conn=0x2b07a13e03a0, start=1726203758271008, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:38.271373] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=8][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:38.271377] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=3][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:38.271421] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=8][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:38.271431] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=10][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:38.271445] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=13][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:38.271452] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=7][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:38.271458] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=4][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:38.271467] WDIAG [SERVER] check_ls_table 
(ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=8][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:38.271473] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C90-0-0] [lt=5][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:38.275559] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203758275470, ctx_timeout_ts=1726203758275470, worker_timeout_ts=1726203758275469, default_timeout=1000000) [2024-09-13 13:02:38.275578] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=19][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:38.275585] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:38.275594] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.275604] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4012] batch renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, 
failed_list=[{id:1}]) [2024-09-13 13:02:38.275617] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.275626] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.275644] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.275679] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566152797, cache_obj->added_lc()=false, cache_obj->get_object_id()=881, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.276451] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203758275469, ctx_timeout_ts=1726203758275469, worker_timeout_ts=1726203758275469, default_timeout=1000000) [2024-09-13 13:02:38.276468] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=16][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:38.276473] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:38.276485] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:38.276492] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:38.276505] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=12][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:38.276529] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=1][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:38.276540] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=10][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.276545] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.276563] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:38.276577] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=0][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:38.276593] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:38.276603] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.276608] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=3] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2000240) [2024-09-13 13:02:38.276615] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:38.276621] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) 
[19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:38.276627] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:38.276631] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=3][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:38.276636] WDIAG [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1) [2024-09-13 13:02:38.276647] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:38.276673] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7E-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566153794, cache_obj->added_lc()=false, cache_obj->get_object_id()=883, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 
0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.276716] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=9][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:38.276724] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:38.276729] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=4][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:38.276735] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:38.276743] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4601) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:38.276749] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1) [2024-09-13 13:02:38.276754] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=4] [REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, 
ret="OB_TIMEOUT", tenant_id=1, cost=2001286) [2024-09-13 13:02:38.276762] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=7][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1) [2024-09-13 13:02:38.276767] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=4] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2001306) [2024-09-13 13:02:38.276776] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=8][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1]) [2024-09-13 13:02:38.276781] INFO [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=5] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:38.276786] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7E-0-0] [lt=5][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:38.276792] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19945][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4012] fail to batch process task(ret=-4012) [2024-09-13 13:02:38.276799] WDIAG [SERVER] run1 (ob_uniq_task_queue.h:456) [19945][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1) [2024-09-13 13:02:38.276815] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=4] [REFRESH_SCHEMA] start to refresh and add schema(tenant_ids=[1]) [2024-09-13 13:02:38.276823] 
INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=7] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1) [2024-09-13 13:02:38.278298] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.278503] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.278526] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.278537] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.278545] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.278556] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203758278555, replica_locations:[]})
[2024-09-13 13:02:38.278595] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1] already timeout, do not need sleep(sleep_us=0, remain_us=1998233, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:38.278685] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.278914] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.278933] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.278942] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.278953] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.278965] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758278964, replica_locations:[]})
[2024-09-13 13:02:38.278978] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.278996] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.279004] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.279019] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.279045] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566156166, cache_obj->added_lc()=false, cache_obj->get_object_id()=884, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.279710] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.279906] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.279922] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.279928] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.279938] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.279948] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758279948, replica_locations:[]})
[2024-09-13 13:02:38.279984] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=1000, remain_us=1996845, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:38.280096] INFO [DETECT] record_summary_info_and_logout_when_necessary_ (ob_lcl_batch_sender_thread.cpp:203) [20240][T1_LCLSender][T1][Y0-0000000000000000-0-0] [lt=21] ObLCLBatchSenderThread periodic report summary info(duty_ratio_percentage=0, total_constructed_detector=0, total_destructed_detector=0, total_alived_detector=0, _lcl_op_interval=30000, lcl_msg_map_.count()=0, *this={this:0x2b07c25fe2b0, is_inited:true, is_running:true, total_record_time:5010000, over_night_times:0})
[2024-09-13 13:02:38.281058] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=20] PNIO [ratelimit] time: 1726203758281056, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007609, add_bytes: 0
[2024-09-13 13:02:38.281150] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.281359] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.281373] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.281379] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.281387] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.281396] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758281395, replica_locations:[]})
[2024-09-13 13:02:38.281407] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.281423] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.281430] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.281465] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.281489] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566158610, cache_obj->added_lc()=false, cache_obj->get_object_id()=885, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.281912] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.282106] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.282120] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.282126] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.282133] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.282140] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.282146] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:38.282152] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:38.282157] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638)
[2024-09-13 13:02:38.282258] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20301][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.282254] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.282467] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.282480] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.282486] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.282493] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.282495] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.282502] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.282500] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758282500, replica_locations:[]})
[2024-09-13 13:02:38.282507] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.282512] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.282519] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758282519, replica_locations:[]})
[2024-09-13 13:02:38.282530] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:38.282532] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1994297, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:38.282539] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0][errcode=-4638] fail to refresh core partition(tmp_ret=-4721)
[2024-09-13 13:02:38.282703] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000)
[2024-09-13 13:02:38.282712] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4638]
[2024-09-13 13:02:38.282796] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.282971] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.282979] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.282984] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.282990] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.282995] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.283000] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:38.283005] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:38.283009] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0)
[2024-09-13 13:02:38.283074] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.283205] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.283216] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.283223] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.283230] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.283236] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.283245] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:38.283251] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:38.283255] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1)
[2024-09-13 13:02:38.283318] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.283464] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.283472] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.283477] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.283482] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.283486] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.283491] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323)
[2024-09-13 13:02:38.283495] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:38.283498] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2)
[2024-09-13 13:02:38.283503] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638)
[2024-09-13 13:02:38.283513] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER")
[2024-09-13 13:02:38.283517] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2)
[2024-09-13 13:02:38.284688] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.284906] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.284922] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.284927] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.284937] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.284947] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758284946, replica_locations:[]})
[2024-09-13 13:02:38.284959] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.284972] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.284980] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.284992] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.285017] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566162138, cache_obj->added_lc()=false, cache_obj->get_object_id()=886, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.285630] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.285819] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.285836] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.285843] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.285852] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.285863] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758285862, replica_locations:[]})
[2024-09-13 13:02:38.285906] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1990922, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:38.289026] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.289205] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.289217] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.289225] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.289235] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.289245] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758289245, replica_locations:[]})
[2024-09-13 13:02:38.289257] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.289273] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.289280] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.289294] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.289319] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566166440, cache_obj->added_lc()=false, cache_obj->get_object_id()=887, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.289927] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.290136] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.290152] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.290161] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.290170] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.290178] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758290177, replica_locations:[]})
[2024-09-13 13:02:38.290212] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1986616, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:38.294388] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.294581] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.294598] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.294604] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.294611] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.294619] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758294619, replica_locations:[]})
[2024-09-13 13:02:38.294631] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.294647] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.294655] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.294682] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.294721] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566171831, cache_obj->added_lc()=false, cache_obj->get_object_id()=888, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.295340] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.295519] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.295535] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1,
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.295541] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.295550] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.295560] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758295560, replica_locations:[]}) [2024-09-13 13:02:38.295594] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1981234, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.299373] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=7] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9) [2024-09-13 13:02:38.300776] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.300961] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4018] fail 
to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.300977] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.300983] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.300996] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.301011] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758301010, replica_locations:[]}) [2024-09-13 13:02:38.301026] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.301047] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 
13:02:38.301058] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.301079] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.301111] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566178229, cache_obj->added_lc()=false, cache_obj->get_object_id()=889, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.301792] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.301987] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.302005] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.302011] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader 
doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.302020] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.302028] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758302028, replica_locations:[]}) [2024-09-13 13:02:38.302064] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1] will sleep(sleep_us=6000, remain_us=1974764, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.308269] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.308471] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.308489] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.308495] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.308502] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.308510] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758308509, replica_locations:[]}) [2024-09-13 13:02:38.308521] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.308554] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=28][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:38.308571] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.308579] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.308608] WDIAG [SQL] move_to_sqlstat_cache 
(ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.308635] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566185756, cache_obj->added_lc()=false, cache_obj->get_object_id()=890, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.309548] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.309904] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.309930] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.309941] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.309953] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.309965] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758309964, replica_locations:[]}) [2024-09-13 13:02:38.310004] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=7000, remain_us=1966824, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.317188] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.317475] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.317493] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.317503] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.317514] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.317526] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758317525, replica_locations:[]}) [2024-09-13 13:02:38.317547] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.317584] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.317599] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.317620] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.317648] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6566194769, cache_obj->added_lc()=false, cache_obj->get_object_id()=891, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.318322] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.318596] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.318614] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.318625] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.318636] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.318651] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758318650, replica_locations:[]}) [2024-09-13 13:02:38.318700] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1] will sleep(sleep_us=8000, remain_us=1958128, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.321636] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:38.326951] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.327393] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.327426] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.327467] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=40] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 
13:02:38.327489] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.327503] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758327502, replica_locations:[]}) [2024-09-13 13:02:38.327518] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.327539] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.327541] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=172][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.327551] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.327582] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.327613] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566204734, cache_obj->added_lc()=false, cache_obj->get_object_id()=892, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.328313] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.328525] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.328546] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.328556] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.328578] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.328591] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758328590, replica_locations:[]}) [2024-09-13 13:02:38.328639] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1948190, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.329068] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.334317] INFO [OCCAM] get_idx (ob_occam_time_guard.h:224) [20233][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=20] init point thread id with(&point=0x55a3873cd6c0, idx_=3849, point=[thread id=20233, timeout ts=08:00:00.0, last click point="(null):(null):0", last click ts=08:00:00.0], thread_id=20233) [2024-09-13 13:02:38.337829] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.338096] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.338115] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.338125] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.338136] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.338149] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758338148, replica_locations:[]}) [2024-09-13 13:02:38.338163] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.338182] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.338202] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.338220] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not 
valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.338249] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566215370, cache_obj->added_lc()=false, cache_obj->get_object_id()=893, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.338984] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.339170] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.339188] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.339198] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.339209] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:38.339221] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758339220, replica_locations:[]}) [2024-09-13 13:02:38.339271] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1] will sleep(sleep_us=10000, remain_us=1937557, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.347273] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:38.347298] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=24][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0") [2024-09-13 13:02:38.347330] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1}) [2024-09-13 13:02:38.347340] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:38.347347] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) 
[20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=4] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:38.347342] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CE4-0-0] [lt=28][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203758347307}) [2024-09-13 13:02:38.347355] INFO [STORAGE.TRANS] statistics (ob_gts_source.cpp:70) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=7] gts statistics(tenant_id=1, gts_rpc_cnt=0, get_gts_cache_cnt=8923, get_gts_with_stc_cnt=0, try_get_gts_cache_cnt=0, try_get_gts_with_stc_cnt=0, wait_gts_elapse_cnt=0, try_wait_gts_elapse_cnt=0) [2024-09-13 13:02:38.347364] WDIAG [STORAGE.TRANS] operator() (ob_ts_mgr.h:175) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-4721] refresh gts failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:38.347370] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=6] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:38.349434] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.349663] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=18] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:38.349698] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.349714] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.349724] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.349735] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.349747] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758349747, replica_locations:[]}) [2024-09-13 13:02:38.349762] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.349781] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.349800] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.349828] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.349859] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566226979, cache_obj->added_lc()=false, cache_obj->get_object_id()=894, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.350574] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.350800] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.350818] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.350828] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.350839] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.350851] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758350850, replica_locations:[]}) [2024-09-13 13:02:38.350903] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1925925, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.362103] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.362383] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.362404] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.362420] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.362432] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.362470] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=32] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758362469, replica_locations:[]}) [2024-09-13 13:02:38.362485] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.362505] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.362515] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.362536] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.362557] WDIAG [STORAGE.TRANS] 
process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:38.362569] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566239689, cache_obj->added_lc()=false, cache_obj->get_object_id()=895, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.362599] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758362550) [2024-09-13 13:02:38.362613] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203758162495, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:38.362638] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] 
get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.362648] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.362655] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758362625) [2024-09-13 13:02:38.363513] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.363792] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.363813] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.363823] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.363835] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:38.363848] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758363847, replica_locations:[]}) [2024-09-13 13:02:38.363902] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1912927, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.370429] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5B-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:38.370456] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5B-0-0] [lt=26][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203758369977], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:38.370930] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEB-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:38.371923] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEB-0-0] [lt=15][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, 
timestamp:1726203758371218, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035869, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203758370576}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:38.371946] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEB-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:38.376098] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.376310] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.376331] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.376342] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.376354] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.376367] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758376366, replica_locations:[]}) [2024-09-13 13:02:38.376381] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.376402] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.376426] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.376523] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.376567] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566253686, cache_obj->added_lc()=false, cache_obj->get_object_id()=896, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 
0x2b079609bead") [2024-09-13 13:02:38.377393] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.377598] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.377627] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.377637] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.377649] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.377660] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758377660, replica_locations:[]}) [2024-09-13 13:02:38.377704] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will 
sleep(sleep_us=13000, remain_us=1899124, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.389629] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.390893] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.391082] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92199005D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.391122] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.391140] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.391150] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.391161] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.391174] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758391174, replica_locations:[]}) [2024-09-13 13:02:38.391194] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.391215] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.391225] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.391245] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.391279] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566268398, cache_obj->added_lc()=false, cache_obj->get_object_id()=897, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 
0x2b079609bead") [2024-09-13 13:02:38.392661] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.392928] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.392977] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=47][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.392998] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.393041] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=41] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.393064] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758393063, replica_locations:[]}) [2024-09-13 13:02:38.393123] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will 
sleep(sleep_us=14000, remain_us=1883705, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.394509] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=14] ====== tenant freeze timer task ====== [2024-09-13 13:02:38.394548] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=25][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}}) [2024-09-13 13:02:38.404236] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=14] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:38.407351] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.407607] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.407629] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.407640] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.407652] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.407670] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758407669, replica_locations:[]}) [2024-09-13 13:02:38.407692] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.407724] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.407738] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.407772] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.407816] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6566284933, cache_obj->added_lc()=false, cache_obj->get_object_id()=898, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.408751] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.409063] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.409105] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.409121] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.409119] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=25][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:38.409132] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", 
server_list=[]) [2024-09-13 13:02:38.409179] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=41] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758409178, replica_locations:[]}) [2024-09-13 13:02:38.409249] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=15000, remain_us=1867580, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.423207] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=22][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:38.424459] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.424722] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.424742] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.424756] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.424772] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.424805] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=28] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758424805, replica_locations:[]}) [2024-09-13 13:02:38.424835] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=28] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.424859] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.424869] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.424902] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.424944] WDIAG [SQL.PC] common_free 
(ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566302061, cache_obj->added_lc()=false, cache_obj->get_object_id()=899, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.425854] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.426088] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.426108] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.426118] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.426129] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.426153] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758426141, replica_locations:[]}) [2024-09-13 13:02:38.426200] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1850628, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.442387] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.442654] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.442674] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.442684] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.442695] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, 
ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.442708] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758442708, replica_locations:[]}) [2024-09-13 13:02:38.442722] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.442744] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.442754] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.442784] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.442821] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566319940, cache_obj->added_lc()=false, cache_obj->get_object_id()=900, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 
0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.443732] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.444018] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.444039] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.444049] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.444070] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.444083] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758444082, replica_locations:[]}) [2024-09-13 13:02:38.444126] INFO [SERVER] sleep_before_local_retry 
(ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1832702, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.454343] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921690069-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.461370] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.461646] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.461674] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.461685] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.461698] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.461715] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location 
cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758461714, replica_locations:[]}) [2024-09-13 13:02:38.461732] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.461759] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.461769] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.461794] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.461892] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=33][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566338990, cache_obj->added_lc()=false, cache_obj->get_object_id()=901, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.462613] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc 
(ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE3-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758462085) [2024-09-13 13:02:38.462635] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.462647] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.462653] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758462618) [2024-09-13 13:02:38.462671] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:38.462644] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE3-0-0] [lt=30][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, 
v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203758462085}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:38.462686] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758462666) [2024-09-13 13:02:38.462697] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203758362625, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:38.462709] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0) [2024-09-13 13:02:38.462721] WDIAG [STORAGE.TRANS] 
generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.462725] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.462728] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758462718) [2024-09-13 13:02:38.463130] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.463367] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.463389] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.463400] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.463412] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.463425] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758463424, replica_locations:[]}) [2024-09-13 13:02:38.463487] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=18000, remain_us=1813342, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.478539] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.478913] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.479833] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.480134] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.480489] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=73][errcode=-4719] get ls handle 
failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.481658] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.481897] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.481919] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.481930] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.481941] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.481954] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758481953, replica_locations:[]}) [2024-09-13 13:02:38.481969] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.481991] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.482001] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.482023] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.482075] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566359189, cache_obj->added_lc()=false, cache_obj->get_object_id()=902, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.483113] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.483281] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica 
count=0) [2024-09-13 13:02:38.483301] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.483311] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.483323] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.483334] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758483334, replica_locations:[]}) [2024-09-13 13:02:38.483379] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1793449, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.499464] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=14] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8) [2024-09-13 13:02:38.502582] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.502845] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.502866] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.502900] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=33] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.502912] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.502926] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758502925, replica_locations:[]}) [2024-09-13 13:02:38.502952] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.502975] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.502989] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.503026] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.503072] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566380189, cache_obj->added_lc()=false, cache_obj->get_object_id()=903, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.504013] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.504205] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.504225] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.504236] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.504247] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.504259] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758504259, replica_locations:[]}) [2024-09-13 13:02:38.504304] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1772525, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.521980] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=22] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:38.524550] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.524901] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.524931] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.524947] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.524965] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.524987] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758524985, replica_locations:[]}) [2024-09-13 13:02:38.525010] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 
13:02:38.525041] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.525055] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.525085] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.525143] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566402258, cache_obj->added_lc()=false, cache_obj->get_object_id()=904, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.526415] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.526692] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.526718] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.526748] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=29] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.526766] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.526784] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758526784, replica_locations:[]}) [2024-09-13 13:02:38.526850] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1749979, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.530082] WDIAG [ARCHIVE] do_thread_task_ (ob_archive_sender.cpp:256) [20256][T1_ArcSender][T1][YB42AC103323-000621F920F60C7D-0-0] [lt=29][errcode=-4018] try free send task failed(ret=-4018) [2024-09-13 13:02:38.548083] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.548375] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.548402] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.548419] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.548446] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.548468] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758548467, replica_locations:[]}) [2024-09-13 13:02:38.548491] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.548523] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.548553] WDIAG [SQL] do_close 
(ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=29][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.548595] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.548654] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566425768, cache_obj->added_lc()=false, cache_obj->get_object_id()=905, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.549950] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=34][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.550220] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.550246] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.550263] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] leader doesn't exist, try use 
all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.550279] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.550297] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758550297, replica_locations:[]}) [2024-09-13 13:02:38.550368] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1] will sleep(sleep_us=22000, remain_us=1726460, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.562744] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:38.562770] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758562737) [2024-09-13 13:02:38.562780] WDIAG 
[STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203758462706, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:38.562799] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.562805] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.562810] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758562786) [2024-09-13 13:02:38.572627] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.572961] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.572993] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.573010] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.573047] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=36] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.573067] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758573066, replica_locations:[]}) [2024-09-13 13:02:38.573090] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.573133] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.573149] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.573178] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.573246] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566450347, cache_obj->added_lc()=false, cache_obj->get_object_id()=906, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.574617] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.574816] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.574840] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.574856] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.574873] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", 
server_list=[]) [2024-09-13 13:02:38.574902] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758574901, replica_locations:[]}) [2024-09-13 13:02:38.574964] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1701864, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.598239] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.598514] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.598545] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.598562] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.598579] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.598602] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758598601, replica_locations:[]}) [2024-09-13 13:02:38.598645] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=40] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.598679] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.598694] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.598732] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.598790] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566475903, cache_obj->added_lc()=false, cache_obj->get_object_id()=907, cache_obj->get_tenant_id()=1, 
lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.600127] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.600351] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.600377] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.600393] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.600410] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.600429] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203758600428, replica_locations:[]}) [2024-09-13 13:02:38.600510] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=24000, remain_us=1676320, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.624832] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.625154] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.625187] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.625205] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.625223] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.625244] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758625243, replica_locations:[]}) [2024-09-13 13:02:38.625269] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.625302] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.625317] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.625347] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.625408] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566502521, cache_obj->added_lc()=false, cache_obj->get_object_id()=908, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.626756] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.626995] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.627024] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.627051] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=26] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.627068] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.627087] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758627086, replica_locations:[]}) [2024-09-13 13:02:38.627175] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=25000, remain_us=1649654, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203760276828) 
[2024-09-13 13:02:38.629026] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=40] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 
9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:38.652512] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.652792] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.652820] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) 
[2024-09-13 13:02:38.652832] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.652844] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.652862] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758652860, replica_locations:[]}) [2024-09-13 13:02:38.652894] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=30] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.652923] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.652934] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.652976] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, 
ret="OB_SUCCESS") [2024-09-13 13:02:38.653038] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566530151, cache_obj->added_lc()=false, cache_obj->get_object_id()=909, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.654306] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.654478] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.654502] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.654513] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.654525] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.654539] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758654538, replica_locations:[]}) [2024-09-13 13:02:38.654596] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1622232, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.662736] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE4-0-0] [lt=46][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758662222) [2024-09-13 13:02:38.662795] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE4-0-0] [lt=56][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203758662222}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, 
valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:38.662810] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:38.662832] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758662802) [2024-09-13 13:02:38.662844] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203758562786, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:38.662871] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.662892] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.662901] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ 
(ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758662855) [2024-09-13 13:02:38.680871] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.681185] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.681212] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.681223] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.681239] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.681259] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203758681258, replica_locations:[]}) [2024-09-13 13:02:38.681295] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=33] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.681327] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.681339] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.681367] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.681425] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566558538, cache_obj->added_lc()=false, cache_obj->get_object_id()=910, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.682599] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=33][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.682815] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.682838] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.682851] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.682865] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.682890] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758682889, replica_locations:[]}) [2024-09-13 13:02:38.682955] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1593874, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.699554] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=24] replace map num 
details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7) [2024-09-13 13:02:38.710232] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.710497] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.710520] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.710526] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.710537] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.710550] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758710548, replica_locations:[]}) [2024-09-13 13:02:38.710563] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.710588] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.710598] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.710628] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.710678] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566587795, cache_obj->added_lc()=false, cache_obj->get_object_id()=911, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:38.711754] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.711983] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.712003] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.712010] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.712021] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.712033] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758712033, replica_locations:[]}) [2024-09-13 13:02:38.712103] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=28000, remain_us=1564726, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.722260] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) 
[2024-09-13 13:02:38.728620] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=13] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:38.728802] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=19] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:38.740332] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.740633] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.740653] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.740660] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.740672] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.740686] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758740685, replica_locations:[]}) [2024-09-13 13:02:38.740701] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.740727] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.740736] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.740758] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.740805] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566617920, cache_obj->added_lc()=false, cache_obj->get_object_id()=912, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:38.741887] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.742250] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.742272] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.742281] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.742296] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.742311] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758742311, replica_locations:[]}) [2024-09-13 13:02:38.742370] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=29000, 
remain_us=1534458, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.762854] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE5-0-0] [lt=37][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758762303) [2024-09-13 13:02:38.762889] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:38.762912] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758762882) [2024-09-13 13:02:38.762902] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE5-0-0] [lt=46][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203758762303}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, 
total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:38.762921] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203758662853, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:38.762944] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:38.762950] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:38.762954] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758762931)
[2024-09-13 13:02:38.762965] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:38.762969] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:38.762972] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758762962)
[2024-09-13 13:02:38.771613] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.771970] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.771994] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.772004] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.772020] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758772019, replica_locations:[]})
[2024-09-13 13:02:38.772035] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.772058] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.772066] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.772094] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.772139] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566649256, cache_obj->added_lc()=false, cache_obj->get_object_id()=913, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.773537] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.773761] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.773787] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.773803] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758773802, replica_locations:[]})
[2024-09-13 13:02:38.773867] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1] will sleep(sleep_us=30000, remain_us=1502961, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:38.804106] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.804356] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.804374] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.804386] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758804385, replica_locations:[]})
[2024-09-13 13:02:38.804398] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.804419] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.804428] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.804467] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.804511] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566681629, cache_obj->added_lc()=false, cache_obj->get_object_id()=914, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.805477] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.805710] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.805747] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=35] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.805762] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758805761, replica_locations:[]})
[2024-09-13 13:02:38.805837] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=31000, remain_us=1470991, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:38.812169] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] [lt=14][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:38.837065] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.837458] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.837476] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.837493] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758837492, replica_locations:[]})
[2024-09-13 13:02:38.837508] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.837533] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.837543] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.837565] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.837611] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566714727, cache_obj->added_lc()=false, cache_obj->get_object_id()=915, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.838644] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.838955] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.838973] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.838982] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758838981, replica_locations:[]})
[2024-09-13 13:02:38.839031] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1437798, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:38.841694] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=17][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4018, dropped:11, tid:19945}])
[2024-09-13 13:02:38.847778] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=4][errcode=-4721] nonblock get location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:38.847800] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_location_service.cpp:150) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=20][errcode=-4721] fail to nonblock get log stream location leader(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, leader="0.0.0.0:0")
[2024-09-13 13:02:38.847827] WDIAG [STORAGE.TRANS] get_gts_leader_ (ob_gts_source.cpp:540) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=8][errcode=-4721] gts nonblock get leader failed(ret=-4721, tenant_id=1, GTS_LS={id:1})
[2024-09-13 13:02:38.847839] WDIAG [STORAGE.TRANS] refresh_gts_ (ob_gts_source.cpp:598) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=12][errcode=-4721] get gts leader failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1)
[2024-09-13 13:02:38.847871] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=5] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]})
[2024-09-13 13:02:38.863034] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:38.863064] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=18][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758863028)
[2024-09-13 13:02:38.863075] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203758762928, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:38.863095] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:38.863101] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:38.863106] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758863082)
[2024-09-13 13:02:38.870924] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5C-0-0] [lt=10] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:38.870943] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5C-0-0] [lt=18][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203758870445], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:38.871226] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.871397] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEC-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:38.871709] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.871733] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.871745] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.871759] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.871772] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758871772, replica_locations:[]})
[2024-09-13 13:02:38.871787] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.871807] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.871818] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.871853] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.871920] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEC-0-0] [lt=9][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203758871630, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035878, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203758871536}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:38.871943] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEC-0-0] [lt=23][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:38.871947] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566749064, cache_obj->added_lc()=false, cache_obj->get_object_id()=916, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.872967] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.873195] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.873269] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.873321] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:38.873445] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.873461] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.873467] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.873477] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.873488] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758873487, replica_locations:[]})
[2024-09-13 13:02:38.873540] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=33000, remain_us=1403289, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:38.899641] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=23] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=6)
[2024-09-13 13:02:38.906678] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.907026] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.907045] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.907052] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.907059] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.907072] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758907071, replica_locations:[]})
[2024-09-13 13:02:38.907084] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.907105] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.907114] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.907144] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.907180] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566784299, cache_obj->added_lc()=false, cache_obj->get_object_id()=917, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.908062] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.908275] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.908291] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.908297] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.908304] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.908313] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758908312, replica_locations:[]})
[2024-09-13 13:02:38.908358] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=34000, remain_us=1368471, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:38.922617] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=28] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:38.941840] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=18][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:1444, tid:20031}])
[2024-09-13 13:02:38.942554] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.942952] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.942979] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.942986] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.942997] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.943009] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758943008, replica_locations:[]})
[2024-09-13 13:02:38.943023] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:38.943041] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:34, local_retry_times:34, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:38.943058] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:38.943067] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:38.943077] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:38.943084] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:38.943088] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:38.943121] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:38.943132] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:38.943176] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566820293, cache_obj->added_lc()=false, cache_obj->get_object_id()=918, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:38.944048] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:38.944072] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:38.944141] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:38.944674] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.944688] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:38.944694] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:38.944701] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:38.944710] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758944709, replica_locations:[]})
[2024-09-13 13:02:38.944722] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST",
location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:38.944730] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:38.944736] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:38.944747] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:38.944752] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:38.944760] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:38.944773] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:38.944783] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:38.944789] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:38.944797] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:38.944801] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:38.944806] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:38.944812] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:38.944821] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:38.944826] WDIAG [SQL.OPT] generate_raw_plan 
(ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:38.944831] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:38.944835] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:38.944840] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:38.944845] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:38.944856] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:38.944864] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:38.944869] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:38.944885] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:38.944890] WDIAG [SERVER] do_query 
(ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:38.944895] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=35, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:38.944911] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] will sleep(sleep_us=35000, remain_us=1331918, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:38.962937] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE6-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203758962448) [2024-09-13 13:02:38.962965] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE6-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203758962448}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, 
valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:38.962993] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.963005] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:38.963013] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203758962980) [2024-09-13 13:02:38.980112] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.980410] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.980431] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.980451] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.980460] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.980473] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758980472, replica_locations:[]}) [2024-09-13 13:02:38.980487] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:38.980506] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:35, local_retry_times:35, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:38.980523] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:38.980532] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:38.980544] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:38.980550] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:38.980553] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:38.980566] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:38.980576] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:38.980619] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566857736, cache_obj->added_lc()=false, cache_obj->get_object_id()=919, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:38.981511] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:38.981537] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:38.981620] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:38.981860] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.981884] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:38.981892] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:38.981899] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:38.981909] 
INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203758981909, replica_locations:[]}) [2024-09-13 13:02:38.981922] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:38.981931] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:38.981940] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:38.981951] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:38.981959] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] block renew tablet location 
failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:38.981967] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:38.981980] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:38.981991] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:38.981999] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:38.982007] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:38.982011] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:38.982015] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:38.982025] WDIAG [SQL.OPT] 
generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:38.982033] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:38.982038] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:38.982045] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:38.982049] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:38.982056] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:38.982063] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:38.982073] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:38.982081] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:38.982089] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:38.982095] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:38.982102] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:38.982107] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=36, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:38.982123] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] will sleep(sleep_us=36000, remain_us=1294705, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.018338] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.018684] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4018] fail to 
get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.018709] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.018716] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.018727] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.018742] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759018741, replica_locations:[]}) [2024-09-13 13:02:39.018757] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.018776] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] [RETRY] check if need 
retry(v={force_local_retry:true, stmt_retry_times:36, local_retry_times:36, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:39.018794] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:39.018800] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:39.018816] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:39.018826] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:39.018830] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:39.018849] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:39.018860] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:39.018916] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566896033, cache_obj->added_lc()=false, cache_obj->get_object_id()=920, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:39.019898] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.019924] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:39.019999] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.020282] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.020297] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.020302] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.020312] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.020322] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759020322, replica_locations:[]}) [2024-09-13 13:02:39.020335] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.020344] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:39.020353] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, 
replica_locations:[]}) [2024-09-13 13:02:39.020365] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:39.020370] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:39.020378] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:39.020390] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:39.020401] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:39.020407] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:39.020416] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:39.020421] WDIAG [SQL.JO] 
generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:39.020428] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:39.020435] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:39.020457] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:39.020463] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:39.020470] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:39.020474] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:39.020481] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:39.020486] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:39.020496] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:39.020502] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:39.020509] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:39.020515] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:39.020523] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:39.020529] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=37, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:39.020547] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] will sleep(sleep_us=37000, remain_us=1256282, base_sleep_us=1000, 
retry_sleep_type=1, v.stmt_retry_times_=37, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.057764] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.058059] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.058092] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.058099] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.058107] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.058133] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759058117, replica_locations:[]}) [2024-09-13 13:02:39.058149] INFO [SHARE.LOCATION] batch_renew_tablet_locations 
(ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=29] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.058164] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:37, local_retry_times:37, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:39.058179] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:39.058185] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:39.058193] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:39.058213] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:39.058222] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:39.058236] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = 
'__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:39.058244] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:39.058306] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566935401, cache_obj->added_lc()=false, cache_obj->get_object_id()=921, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:39.059354] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.059379] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:39.059464] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=34][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.059719] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, 
ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.059734] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.059739] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.059762] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.059779] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759059778, replica_locations:[]}) [2024-09-13 13:02:39.059793] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.059802] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", 
cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:39.059809] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.059820] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:39.059825] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:39.059851] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:39.059866] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:39.059885] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4721] Failed to calculate table location(ret=-4721) 
[2024-09-13 13:02:39.059892] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:39.059900] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:39.059904] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:39.059908] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:39.059917] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:39.059926] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:39.059932] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:39.059938] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] fail to generate raw 
plan(ret=-4721) [2024-09-13 13:02:39.059942] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:39.059949] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:39.059957] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:39.059968] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:39.059977] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:39.059994] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:39.060002] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:39.060010] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:39.060016] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, 
tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=38, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:39.060034] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] will sleep(sleep_us=38000, remain_us=1216795, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=38, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.063062] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:39.063077] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:39.063100] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759063056) [2024-09-13 13:02:39.063115] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, 
last_post_cluster_heartbeat_tstamp_=1726203758863082, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:39.063138] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.063147] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.063152] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759063124) [2024-09-13 13:02:39.093219] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=16] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:39.093412] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=11] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:39.093581] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=17] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:39.094655] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=9] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:39.094740] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) 
[20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:39.094895] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=25] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:39.094894] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=15] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:39.095128] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=10] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:39.095493] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=9] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:39.098224] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.098557] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.098578] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.098584] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.098603] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.098617] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759098616, replica_locations:[]}) [2024-09-13 13:02:39.098631] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.098649] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:38, local_retry_times:38, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:39.098666] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:39.098676] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:39.098685] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:39.098690] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:39.098693] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:39.098710] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:39.098721] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:39.098764] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6566975881, cache_obj->added_lc()=false, cache_obj->get_object_id()=922, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:39.099659] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=16][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.099687] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=27][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:39.099720] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=18] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=5) [2024-09-13 13:02:39.099775] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.100070] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.100086] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.100092] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.100099] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.100109] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759100108, replica_locations:[]}) [2024-09-13 13:02:39.100121] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.100129] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:39.100135] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.100144] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:39.100151] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:39.100157] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:39.100169] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:39.100177] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:39.100182] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:39.100190] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:39.100195] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:39.100202] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:39.100208] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:39.100217] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:39.100221] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:39.100225] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:39.100230] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:39.100234] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:39.100239] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:39.100249] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:39.100258] WDIAG 
[SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:39.100267] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:39.100272] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:39.100280] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:39.100284] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=39, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:39.100302] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] will sleep(sleep_us=39000, remain_us=1176526, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=39, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.119903] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=14] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:39.122958] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) 
[19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=21] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:39.139490] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.139907] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.139925] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.139932] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.139939] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.139953] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203759139952, replica_locations:[]}) [2024-09-13 13:02:39.139967] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.139984] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:39, local_retry_times:39, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:39.140001] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:39.140010] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:39.140021] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:39.140028] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:39.140032] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:39.140047] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to process 
record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:39.140057] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:39.140098] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567017216, cache_obj->added_lc()=false, cache_obj->get_object_id()=923, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:39.140922] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC88-0-0] [lt=19][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.141071] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.141089] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:39.141184] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.141451] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.141467] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.141473] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.141480] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.141492] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759141491, replica_locations:[]}) [2024-09-13 13:02:39.141505] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", 
location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.141514] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:39.141523] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:39.141534] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:39.141542] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:39.141547] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:39.141561] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:39.141571] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:39.141577] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:39.141585] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:39.141593] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:39.141597] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:39.141606] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:39.141615] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:39.141619] WDIAG [SQL.OPT] generate_raw_plan 
(ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:39.141626] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:39.141630] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:39.141637] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:39.141643] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:39.141654] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:39.141662] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:39.141670] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:39.141675] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:39.141683] WDIAG [SERVER] do_query 
(ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:39.141690] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=40, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:39.141707] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] will sleep(sleep_us=40000, remain_us=1135122, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=40, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.163034] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE7-0-0] [lt=42][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759162567) [2024-09-13 13:02:39.163092] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE7-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203759162567}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, 
valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:39.163123] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.163138] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.163146] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759163110) [2024-09-13 13:02:39.172179] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2259-0-0] [lt=10][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.172765] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB225D-0-0] [lt=17][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.172996] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB225E-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.173490] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2262-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.173709] WDIAG 
[RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2263-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.174466] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2267-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.174707] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2268-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.175209] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB226C-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.175425] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB226D-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.175909] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119ECDB2271-0-0] [lt=8][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.181873] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.182194] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.182215] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.182222] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.182229] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.182244] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759182243, replica_locations:[]}) [2024-09-13 13:02:39.182258] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.182273] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:40, local_retry_times:40, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:39.182301] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:39.182310] WDIAG [SQL] do_close (ob_result_set.cpp:922) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.182318] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:39.182323] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:39.182327] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:39.182347] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:39.182357] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.182413] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567059531, cache_obj->added_lc()=false, cache_obj->get_object_id()=924, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.183297] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:39.183321] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:39.183395] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.183679] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.183694] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.183700] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.183707] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.183715] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759183714, replica_locations:[]})
[2024-09-13 13:02:39.183725] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:39.183771] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=41000, remain_us=1093057, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=41, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:39.212223] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=38] PNIO [ratelimit] time: 1726203759212221, bytes: 4959382, bw: 0.118655 MB/s, add_ts: 1007609, add_bytes: 125365
[2024-09-13 13:02:39.212721] INFO [MDS] for_each_ls_in_tenant (mds_tenant_service.cpp:237) [20107][T1_Occam][T1][YB42AC103323-000621F921A60C8C-0-0] [lt=19] for each ls(succ_num=0, ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.220173] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=21] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}])
[2024-09-13 13:02:39.222247] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782ED-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.224942] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.225310] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.225332] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.225339] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.225346] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.225357] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759225356, replica_locations:[]})
[2024-09-13 13:02:39.225371] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:39.225392] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:39.225400] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.225419] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.225470] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567102587, cache_obj->added_lc()=false, cache_obj->get_object_id()=925, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.226459] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.226775] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.226794] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.226800] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.226807] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.226816] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759226816, replica_locations:[]})
[2024-09-13 13:02:39.226866] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1] will sleep(sleep_us=42000, remain_us=1049962, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=42, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:39.228710] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=16] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0)
[2024-09-13 13:02:39.228907] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=14] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952)
[2024-09-13 13:02:39.229768] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=16] gc stale ls task succ
[2024-09-13 13:02:39.235066] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=20] start do ls ha handler(ls_id_array_=[])
[2024-09-13 13:02:39.239682] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] read /sys/class/net/eth0/speed failed, errno 22
[2024-09-13 13:02:39.239703] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=17][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000
[2024-09-13 13:02:39.239716] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=11][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0")
[2024-09-13 13:02:39.239724] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=7][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit failed(ret=-4000, ret="OB_ERROR")
[2024-09-13 13:02:39.258399] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, table_name.ptr()="data_size:27, data:5F5F616C6C5F7669727475616C5F6C735F6D6574615F7461626C65", ret=-5019)
[2024-09-13 13:02:39.258434] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=32][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_virtual_ls_meta_table, ret=-5019)
[2024-09-13 13:02:39.258454] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=20][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_virtual_ls_meta_table, db_name=oceanbase)
[2024-09-13 13:02:39.258468] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=12][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_virtual_ls_meta_table)
[2024-09-13 13:02:39.258481] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=9][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:39.258489] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=9][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:39.258500] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=8][errcode=-5019] Table 'oceanbase.__all_virtual_ls_meta_table' doesn't exist
[2024-09-13 13:02:39.258508] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=7][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:39.258515] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=6][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:39.258521] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=5][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:39.258526] WDIAG [SQL.RESV] resolve_joined_table_item (ob_dml_resolver.cpp:3379) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=4][errcode=-5019] resolve table failed(ret=-5019)
[2024-09-13 13:02:39.258531] WDIAG [SQL.RESV] resolve_joined_table (ob_dml_resolver.cpp:2934) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=4][errcode=-5019] resolve joined table item failed(ret=-5019)
[2024-09-13 13:02:39.258536] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2788) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=4][errcode=-5019] resolve joined table failed(ret=-5019)
[2024-09-13 13:02:39.258541] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:39.258550] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=9][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:39.258559] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=8][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:39.258568] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=8][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:39.258584] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=9][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:39.258593] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=8][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:39.258604] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=9][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:39.258613] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=8][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:39.258624] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=9][errcode=-5019] fail to handle text query(stmt=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;, ret=-5019)
[2024-09-13 13:02:39.258635] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=10][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:39.258646] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=9][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:39.258663] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=14][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:39.258678] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:39.258687] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=8][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:39.258695] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=8][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:39.258716] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=8][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:39.258728] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20295][BlackListServic][T1][YB42AC103323-000621F921260C84-0-0] [lt=11][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.258740] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20295][BlackListServic][T0][YB42AC103323-000621F921260C84-0-0] [lt=10][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;"}, aret=-5019, ret=-5019)
[2024-09-13 13:02:39.258752] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;)
[2024-09-13 13:02:39.258762] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=10][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1)
[2024-09-13 13:02:39.258772] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1)
[2024-09-13 13:02:39.258783] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=9][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203759258092, sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;)
[2024-09-13 13:02:39.258797] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:111) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=14][errcode=-5019] read failed(ret=-5019)
[2024-09-13 13:02:39.258805] WDIAG [STORAGE.TRANS] do_thread_task_ (ob_black_list.cpp:222) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-5019] fail to execute sql(ret=-5019, ret="OB_TABLE_NOT_EXIST", sql=select a.svr_ip, a.svr_port, a.tenant_id, a.ls_id, a.role, nvl(b.weak_read_scn, 1) as weak_read_scn, nvl(b.migrate_status, 0) as migrate_status, nvl(b.tx_blocked, 0) as tx_blocked from oceanbase.__all_virtual_ls_meta_table a left join oceanbase.__all_virtual_ls_info b on a.svr_ip = b.svr_ip and a.svr_port = b.svr_port and a.tenant_id = b.tenant_id and a.ls_id = b.ls_id;)
[2024-09-13 13:02:39.258872] INFO [STORAGE.TRANS] run1 (ob_black_list.cpp:194) [20295][BlackListServic][T0][Y0-0000000000000000-0-0] [lt=11] ls blacklist refresh finish(cost_time=1950)
[2024-09-13 13:02:39.263180] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:39.263203] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759263173)
[2024-09-13 13:02:39.263215] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203759063124, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:39.263240] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.263252] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.263260] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759263227)
[2024-09-13 13:02:39.268981] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1})
[2024-09-13 13:02:39.269076] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.269445] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.269465] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.269471] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.269478] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.269489] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759269488, replica_locations:[]})
[2024-09-13 13:02:39.269503] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:39.269523] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:39.269533] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.269564] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.269605] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567146722, cache_obj->added_lc()=false, cache_obj->get_object_id()=926, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.270537] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.270897] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.270916] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.270922] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.270929] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.270938] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759270938, replica_locations:[]})
[2024-09-13 13:02:39.270983] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1] will sleep(sleep_us=43000, remain_us=1005846, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=43, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:39.271623] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C91-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.271897] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=3][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.271922] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.271930] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.271938] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.271966] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=8][errcode=0] server is initiating(server_id=0, local_seq=62, max_local_seq=262143, max_server_id=4095)
[2024-09-13 13:02:39.272856] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=10] table not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019)
[2024-09-13 13:02:39.272888] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=30][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019)
[2024-09-13 13:02:39.272895] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=7][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase)
[2024-09-13 13:02:39.272905] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table)
[2024-09-13 13:02:39.272911] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019)
[2024-09-13 13:02:39.272915] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=4][errcode=-5019] fail to resolve sys view(ret=-5019)
[2024-09-13 13:02:39.272921] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist
[2024-09-13 13:02:39.272925] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=3][errcode=-5019] resolve base or alias table factor failed(ret=-5019)
[2024-09-13 13:02:39.272929] WDIAG [SQL.RESV] resolve_basic_table (ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=4][errcode=-5019] fail to resolve basic table with cte(ret=-5019)
[2024-09-13 13:02:39.272933] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=3][errcode=-5019] resolve basic table failed(ret=-5019)
[2024-09-13 13:02:39.272937] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=4][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019)
[2024-09-13 13:02:39.272945] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=7][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019)
[2024-09-13 13:02:39.272949] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=3][errcode=-5019] resolve normal query failed(ret=-5019)
[2024-09-13 13:02:39.272953] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299)
[2024-09-13 13:02:39.272964] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=7][errcode=-5019] failed to resolve(ret=-5019)
[2024-09-13 13:02:39.272970] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=5][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:39.272976] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=6][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:39.272980] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=4][errcode=-5019] fail to handle physical plan(ret=-5019)
[2024-09-13 13:02:39.272986] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=5][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019)
[2024-09-13 13:02:39.272992] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=5][errcode=-5019] executor execute failed(ret=-5019)
[2024-09-13 13:02:39.272998] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=6][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:39.273011] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=10][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false)
[2024-09-13 13:02:39.273024] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=11][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:39.273029] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=4][errcode=-5019] result set close failed(ret=-5019)
[2024-09-13 13:02:39.273032] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019)
[2024-09-13 13:02:39.273040] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=3][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019)
[2024-09-13 13:02:39.273048] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=8][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.273053] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=5][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table 
WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:39.273061] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=7][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:39.273066] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=4][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:39.273071] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=5][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:39.273078] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=6][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e0060, start=1726203759272759, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:39.273087] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=9][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:39.273092] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=4][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:39.273136] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=8][errcode=-5019] get all ls info by persistent_ls_ 
failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:39.273145] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=8][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:39.273150] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=5][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:39.273158] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=7][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:39.273163] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=3][errcode=-5019] build replica map from ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:39.273168] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=4][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:39.273172] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C91-0-0] [lt=4][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:39.288681] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=17] PNIO [ratelimit] time: 1726203759288680, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007624, add_bytes: 0 [2024-09-13 13:02:39.299796] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=13] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=4) [2024-09-13 
13:02:39.314174] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.314555] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.314575] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.314582] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.314590] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.314604] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759314603, replica_locations:[]}) [2024-09-13 13:02:39.314618] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew 
tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.314637] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:39.314655] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:39.314665] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:39.314683] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:39.314727] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567191843, cache_obj->added_lc()=false, cache_obj->get_object_id()=927, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:39.315914] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.316161] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get 
leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.316181] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.316187] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.316194] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.316206] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759316205, replica_locations:[]}) [2024-09-13 13:02:39.316255] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1] will sleep(sleep_us=44000, remain_us=960574, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=44, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.323281] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=16] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, 
tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:39.348293] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=19][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:39.348341] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:39.348355] INFO [STORAGE.TRANS] refresh_gts (ob_gts_source.cpp:520) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=13] refresh gts(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", tenant_id=1, need_refresh=false, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:39.348363] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1}) [2024-09-13 13:02:39.348353] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CEA-0-0] [lt=13][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203759348320}) [2024-09-13 13:02:39.349766] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=25] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1) [2024-09-13 13:02:39.360479] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.360860] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.360890] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.360897] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.360907] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.360920] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759360919, replica_locations:[]}) [2024-09-13 13:02:39.360936] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.360958] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) 
[2024-09-13 13:02:39.360968] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:39.360997] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:39.361045] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567238161, cache_obj->added_lc()=false, cache_obj->get_object_id()=928, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:39.362056] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.362523] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.362545] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.362554] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.362566] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.362582] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759362581, replica_locations:[]}) [2024-09-13 13:02:39.362644] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=45000, remain_us=914185, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=45, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.363132] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE8-0-0] [lt=24][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759362712) [2024-09-13 13:02:39.363167] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE8-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", 
version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203759362712}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:39.363234] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.363289] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=54][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.363301] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759363199) [2024-09-13 13:02:39.371472] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5D-0-0] [lt=12] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:39.371489] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5D-0-0] [lt=16][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203759370911], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 13:02:39.372098] WDIAG 
[RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DED-0-0] [lt=9][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.372643] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DED-0-0] [lt=15][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203759372336, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035917, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203759372104}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:39.372668] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DED-0-0] [lt=25][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:39.407844] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.408237] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.408260] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, 
replica count=0) [2024-09-13 13:02:39.408267] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.408278] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.408293] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759408292, replica_locations:[]}) [2024-09-13 13:02:39.408307] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.408330] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:39.408336] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:39.408355] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache 
mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:39.408401] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567285518, cache_obj->added_lc()=false, cache_obj->get_object_id()=929, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:39.409372] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.409701] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.409721] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.409727] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.409734] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.409743] INFO 
[SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759409743, replica_locations:[]})
[2024-09-13 13:02:39.409792] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=46000, remain_us=867037, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=46, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:39.456024] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.456266] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92169006A-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.456395] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.456418] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.456425] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.456461] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=34] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.456475] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759456474, replica_locations:[]})
[2024-09-13 13:02:39.456489] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:39.456516] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:39.456528] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.456569] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.456617] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567333734, cache_obj->added_lc()=false, cache_obj->get_object_id()=930, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.457654] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.458366] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.458391] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.458401] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.458416] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.458429] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759458428, replica_locations:[]})
[2024-09-13 13:02:39.458510] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=47000, remain_us=818318, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=47, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:39.459630] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E7-0-0] [lt=13][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:39.460206] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E7-0-0] [lt=12][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:39.460469] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E7-0-0] [lt=6][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:39.460918] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E7-0-0] [lt=6][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:39.461150] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E7-0-0] [lt=6][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:39.461604] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119DA03E3E7-0-0] [lt=6][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:39.463233] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:39.463269] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=23][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759463227)
[2024-09-13 13:02:39.463286] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203759263227, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:39.463312] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.463324] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.463333] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759463298)
[2024-09-13 13:02:39.499887] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=16] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=3)
[2024-09-13 13:02:39.505739] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.506208] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.506239] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.506248] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.506258] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.506274] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759506272, replica_locations:[]})
[2024-09-13 13:02:39.506293] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:39.506321] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:39.506359] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=36][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.506398] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.506473] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567383587, cache_obj->added_lc()=false, cache_obj->get_object_id()=931, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.507498] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.507708] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.507739] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.507745] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.507752] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.507761] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759507760, replica_locations:[]})
[2024-09-13 13:02:39.507816] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=48000, remain_us=769013, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=48, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:39.523597] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:39.526272] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20294][T1_L0_G0][T1][YB42AC103326-00062119DAF2902F-0-0] [lt=18][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:39.556059] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.556342] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.556364] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.556372] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.556380] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.556395] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759556394, replica_locations:[]})
[2024-09-13 13:02:39.556410] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:39.556432] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:39.556449] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.556478] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.556526] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567433643, cache_obj->added_lc()=false, cache_obj->get_object_id()=932, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.557593] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.557818] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.557833] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.557844] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.557854] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.557864] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759557863, replica_locations:[]})
[2024-09-13 13:02:39.557928] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=49000, remain_us=718901, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=49, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:39.560897] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20290][T1_L0_G0][T1][YB42AC103326-00062119D8B51743-0-0] [lt=20][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232)
[2024-09-13 13:02:39.563293] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE9-0-0] [lt=6][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759562855)
[2024-09-13 13:02:39.563305] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.563319] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.563327] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759563291)
[2024-09-13 13:02:39.563323] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AE9-0-0] [lt=28][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203759562855}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:39.563343] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:39.563363] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759563338)
[2024-09-13 13:02:39.563377] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203759463298, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:39.563392] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-09-13 13:02:39.563409] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.563413] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.563417] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759563405)
[2024-09-13 13:02:39.586296] WDIAG [SERVER] deliver_rpc_request (ob_srv_deliver.cpp:602) [19932][pnio1][T0][YB42AC103326-00062119EC0A11A1-0-0] [lt=10][errcode=-5150] can't deliver request(req={packet:{hdr_:{checksum_:823934577, pcode_:1316, hlen_:184, priority_:5, flags_:6151, tenant_id_:1001, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:1999154, timestamp:1726203759585945, dst_cluster_id:-1, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035937, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203759563216}, chid_:0, clen_:306, assemble:false, msg_count:0, payload:0}, type:0, group:0, sql_req_level:0, connection_phase:0, recv_timestamp_:1726203759586292, enqueue_timestamp_:0, request_arrival_time_:1726203759586292, trace_id_:Y0-0000000000000000-0-0}, ret=-5150)
[2024-09-13 13:02:39.586352] WDIAG [SERVER] deliver (ob_srv_deliver.cpp:766) [19932][pnio1][T0][YB42AC103326-00062119EC0A11A1-0-0] [lt=42][errcode=-5150] deliver rpc request fail(&req=0x2b07d9a0a098, ret=-5150)
[2024-09-13 13:02:39.607189] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.607520] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.607543] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.607549] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.607562] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.607575] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759607574, replica_locations:[]})
[2024-09-13 13:02:39.607590] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:39.607615] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:39.607621] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.607649] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.607696] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567484812, cache_obj->added_lc()=false, cache_obj->get_object_id()=933, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.608760] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.608987] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.609052] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=64][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.609059] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.609094] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=32] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.609113] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759609112, replica_locations:[]})
[2024-09-13 13:02:39.609184] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=50000, remain_us=667644, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=50, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:39.629756] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=52] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; )
[2024-09-13 13:02:39.659421] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.659740] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.659780] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=40][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.659788] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.659800] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.659813] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759659812, replica_locations:[]})
[2024-09-13 13:02:39.659829] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:39.659852] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:39.659861] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.659894] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.659943] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567537060, cache_obj->added_lc()=false, cache_obj->get_object_id()=934, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.661041] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.661266] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.661284] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.661291] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.661299] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.661308] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]},
new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759661307, replica_locations:[]}) [2024-09-13 13:02:39.661371] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=51000, remain_us=615458, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=51, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.663409] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:39.663433] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759663403) [2024-09-13 13:02:39.663450] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203759563390, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:39.663468] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.663474] WDIAG [STORAGE.TRANS] 
generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.663479] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759663456) [2024-09-13 13:02:39.699989] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=20] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=2) [2024-09-13 13:02:39.712736] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.713011] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.713042] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.713049] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.713060] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.713075] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759713073, replica_locations:[]}) [2024-09-13 13:02:39.713094] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.713116] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:39.713125] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:39.713154] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:39.713205] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567590322, cache_obj->added_lc()=false, cache_obj->get_object_id()=935, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:39.714280] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=36][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.714566] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.714586] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.714593] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.714604] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.714613] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203759714612, replica_locations:[]}) [2024-09-13 13:02:39.714668] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=52000, remain_us=562161, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=52, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.723985] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=25] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:39.728804] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=15] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:39.729013] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=21] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=1, size_used=0, mem_used=16637952) [2024-09-13 13:02:39.763489] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:39.763519] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, 
local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759763481) [2024-09-13 13:02:39.763530] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203759663456, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:39.763554] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.763561] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.763570] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759763538) [2024-09-13 13:02:39.763709] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AEA-0-0] [lt=20][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759763002) [2024-09-13 13:02:39.763757] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.763764] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:39.763744] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AEA-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203759763002}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:39.763768] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759763754) [2024-09-13 13:02:39.766907] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.767292] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.767311] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.767318] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.767326] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.767339] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759767338, replica_locations:[]}) [2024-09-13 13:02:39.767353] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.767378] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:39.767388] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:39.767412] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:39.767469] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567644586, cache_obj->added_lc()=false, cache_obj->get_object_id()=936, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:39.768637] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.768905] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.768942] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=36][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.768952] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.768967] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.768981] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759768981, replica_locations:[]}) [2024-09-13 13:02:39.769035] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=53000, remain_us=507793, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=53, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.822260] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.822590] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.822610] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader 
replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.822618] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.822629] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.822652] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759822651, replica_locations:[]}) [2024-09-13 13:02:39.822668] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:39.822689] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:39.822696] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:39.822723] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:39.822792] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567699886, cache_obj->added_lc()=false, cache_obj->get_object_id()=937, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:39.823998] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:39.824302] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.824329] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:39.824338] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:39.824348] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] 
server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:39.824360] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759824359, replica_locations:[]}) [2024-09-13 13:02:39.824409] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=54000, remain_us=452419, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=54, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:39.836372] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20291][T1_L0_G0][T1][YB42AC103326-00062119D8D56B5D-0-0] [lt=56][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:39.848800] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:39.863841] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:39.863868] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] 
[lt=26][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759863825)
[2024-09-13 13:02:39.863889] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203759763537, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:39.863910] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.863921] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.863927] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759863896)
[2024-09-13 13:02:39.871794] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5E-0-0] [lt=9] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:39.871812] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5E-0-0] [lt=17][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203759871350], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:39.872436] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEE-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:39.872901] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:39.873063] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEE-0-0] [lt=34][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203759872720, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035952, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203759872192}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0})
[2024-09-13 13:02:39.873111] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEE-0-0] [lt=48][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:39.873165] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=14] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:39.873996] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=15] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0)
[2024-09-13 13:02:39.878666] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.879002] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.879024] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.879031] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.879040] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.879055] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759879054, replica_locations:[]})
[2024-09-13 13:02:39.879070] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:39.879094] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:39.879103] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.879123] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.879170] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567756288, cache_obj->added_lc()=false, cache_obj->get_object_id()=938, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.880286] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.880532] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.880553] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.880559] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.880568] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.880578] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759880578, replica_locations:[]})
[2024-09-13 13:02:39.880630] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0] will sleep(sleep_us=55000, remain_us=396199, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=55, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:39.900106] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=35] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=1)
[2024-09-13 13:02:39.924307] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:39.929622] INFO [COMMON] clean_garbage_node (ob_kvcache_map.cpp:635) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=27] Cache wash clean map node details(ret=0, clean_node_count=0, clean_time=5286, clean_start_pos=1384119, clean_num=125829)
[2024-09-13 13:02:39.935906] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.936207] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.936230] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.936237] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.936248] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.936262] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759936261, replica_locations:[]})
[2024-09-13 13:02:39.936277] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:39.936313] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:39.936322] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.936350] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.936400] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567813515, cache_obj->added_lc()=false, cache_obj->get_object_id()=939, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.937421] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.937787] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.937808] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.937814] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.937823] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.937831] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759937830, replica_locations:[]})
[2024-09-13 13:02:39.937889] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=1] will sleep(sleep_us=56000, remain_us=338939, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=56, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:39.943117] WDIAG [SHARE] refresh (ob_task_define.cpp:402) [19886][LogLimiterRefre][T0][Y0-0000000000000000-0-0] [lt=21][errcode=0] Throttled WDIAG logs in last second(details {error code, dropped logs, earliest tid}=[{errcode:-4721, dropped:513, tid:19945}])
[2024-09-13 13:02:39.963597] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AEB-0-0] [lt=27][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203759963163)
[2024-09-13 13:02:39.963631] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AEB-0-0] [lt=27][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203759963163}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:39.963677] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.963695] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:39.963704] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203759963650)
[2024-09-13 13:02:39.994167] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.994505] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.994531] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.994539] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.994552] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.994567] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759994566, replica_locations:[]})
[2024-09-13 13:02:39.994583] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:39.994602] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:56, local_retry_times:56, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:39.994620] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:39.994627] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:39.994636] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:39.994644] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:39.994647] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:39.994661] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:39.994682] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:39.994733] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567871849, cache_obj->added_lc()=false, cache_obj->get_object_id()=940, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:39.995769] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:39.995808] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=38][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:39.995974] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:39.996141] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.996165] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:39.996172] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:39.996183] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:39.996196] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203759996195, replica_locations:[]})
[2024-09-13 13:02:39.996210] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:39.996220] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:39.996229] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:39.996241] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:39.996250] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:39.996257] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]})
[2024-09-13 13:02:39.996271] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}])
[2024-09-13 13:02:39.996282] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721)
[2024-09-13 13:02:39.996289] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721)
[2024-09-13 13:02:39.996294] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to calc table location(ret=-4721)
[2024-09-13 13:02:39.996301] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to compute base path property(ret=-4721)
[2024-09-13 13:02:39.996307] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721)
[2024-09-13 13:02:39.996317] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name)
[2024-09-13 13:02:39.996326] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] failed to generate plan tree for plain select(ret=-4721)
[2024-09-13 13:02:39.996331] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721)
[2024-09-13 13:02:39.996338] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] fail to generate raw plan(ret=-4721)
[2024-09-13 13:02:39.996342] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to perform optimization(ret=-4721)
[2024-09-13 13:02:39.996349] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] Failed to optimize logical plan(ret=-4721)
[2024-09-13 13:02:39.996367] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4721] Failed to optimizer stmt(ret=-4721)
[2024-09-13 13:02:39.996382] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] failed to generate plan(ret=-4721)
[2024-09-13 13:02:39.996391] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false)
[2024-09-13 13:02:39.996400] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] fail to handle physical plan(ret=-4721)
[2024-09-13 13:02:39.996407] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721)
[2024-09-13 13:02:39.996415] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] executor execute failed(ret=-4721)
[2024-09-13 13:02:39.996426] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=57, local_sys_schema_version=1, local_tenant_schema_version=1)
[2024-09-13 13:02:39.996467] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8] will sleep(sleep_us=57000, remain_us=280362, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=57, v.err_=-4721, timeout_timestamp=1726203760276828)
[2024-09-13 13:02:40.053720] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.054132] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.054154] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.054161] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.054170] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.054183] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760054182, replica_locations:[]})
[2024-09-13 13:02:40.054203] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.054229] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:57, local_retry_times:57, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true)
[2024-09-13 13:02:40.054249] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.054258] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.054268] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:40.054275] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] result set close failed(ret=-4721)
[2024-09-13 13:02:40.054279] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721)
[2024-09-13 13:02:40.054316] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721)
[2024-09-13 13:02:40.054327] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.054382] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567931496, cache_obj->added_lc()=false, cache_obj->get_object_id()=941, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.055416] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:40.055458] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=41][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:40.055572] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.055860] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.055897] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.055906] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.055915] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.055928] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760055927, replica_locations:[]})
[2024-09-13 13:02:40.055948] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:40.055963] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1})
[2024-09-13 13:02:40.055973] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]})
[2024-09-13 13:02:40.055991] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721)
[2024-09-13 13:02:40.055999] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1})
[2024-09-13 13:02:40.056011] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] Get partition error, the location cache
will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:40.056031] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:40.056041] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:40.056047] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:40.056085] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=36][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:40.056089] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:40.056095] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:40.056102] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE 
table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:40.056111] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:40.056116] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:40.056120] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:40.056124] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:40.056129] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:40.056136] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:40.056148] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=3][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:40.056156] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:40.056162] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 
13:02:40.056169] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:40.056175] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=4][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:40.056182] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=58, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:40.056202] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11] will sleep(sleep_us=58000, remain_us=220627, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=58, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:40.063692] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AEC-0-0] [lt=28][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760063253) [2024-09-13 13:02:40.063726] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AEC-0-0] [lt=31][errcode=-4341] tenant weak read service process cluster heartbeat RPC 
fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203760063253}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:40.063743] WDIAG [STORAGE.TRANS] get_cluster_service_master_ (ob_tenant_weak_read_service.cpp:291) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4076] tenant schema is not ready, need wait(ret=-4076, ret="OB_NEED_WAIT", superior_tenant_id=1) [2024-09-13 13:02:40.063766] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=21][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:40.063799] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760063735) [2024-09-13 13:02:40.063845] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=45][errcode=-4076] tenant weak read 
service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203759863896, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:40.063894] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=10][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.063914] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.063921] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760063859) [2024-09-13 13:02:40.072954] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20326][T1_L0_G19][T1][YB42AC103326-00062119D94365E3-0-0] [lt=19][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:40.092918] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20042][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=16] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.093450] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20041][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=12] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.093886] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20043][MysqlIO][T0][Y0-0000000000000000-0-0] [lt=18] [MYSQL EASY STAT](log_str=conn count=0/0, request 
done=0/0, request doing=0/0) [2024-09-13 13:02:40.094430] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20037][BatchIO][T0][Y0-0000000000000000-0-0] [lt=13] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.094647] INFO [RPC.FRAME] mysql_easy_timer_cb (ob_net_easy.cpp:676) [20045][MysqlUnix][T0][Y0-0000000000000000-0-0] [lt=13] [MYSQL EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.094797] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20047][RpcUnix][T0][Y0-0000000000000000-0-0] [lt=16] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.094944] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20036][BatchIO][T0][Y0-0000000000000000-0-0] [lt=10] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.095081] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20039][BatchIO][T0][Y0-0000000000000000-0-0] [lt=10] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.095113] INFO [RPC.FRAME] batch_rpc_easy_timer_cb (ob_net_easy.cpp:641) [20038][BatchIO][T0][Y0-0000000000000000-0-0] [lt=15] [BATCH_RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.100227] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=31] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=0) [2024-09-13 13:02:40.114499] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.114870] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.114910] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.114921] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.114933] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.114954] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760114952, replica_locations:[]}) [2024-09-13 13:02:40.114976] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.115002] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4721] [RETRY] check if need 
retry(v={force_local_retry:true, stmt_retry_times:58, local_retry_times:58, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:40.115026] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.115038] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.115053] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:40.115064] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:40.115070] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:40.115088] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:40.115103] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.115164] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6567992277, cache_obj->added_lc()=false, cache_obj->get_object_id()=942, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.116427] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=28][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.116475] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=47][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:40.116601] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.116924] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.116945] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.116955] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.116967] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.116983] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760116982, replica_locations:[]}) [2024-09-13 13:02:40.117002] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.117016] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:40.117026] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, 
renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.117043] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:40.117051] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:40.117059] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:40.117077] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:40.117088] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:40.117099] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:40.117110] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:40.117120] 
WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:40.117127] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:40.117140] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:40.117164] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:40.117175] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:40.117181] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=5][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:40.117187] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=6][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:40.117197] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:40.117205] WDIAG [SQL] generate_plan 
(ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=7][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:40.117223] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=8][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:40.117236] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:40.117247] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:40.117260] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=10][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:40.117272] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:40.117283] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=9][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=59, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:40.117307] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] will sleep(sleep_us=59000, remain_us=159522, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=59, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:40.119978] INFO do_work (ob_rl_mgr.cpp:709) [19942][rl_mgr0][T0][Y0-0000000000000000-0-0] [lt=14] swc wakeup.(stat_period_=1000000, ready=false) [2024-09-13 13:02:40.129960] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=19] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:40.140840] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20292][T1_L0_G0][T1][YB42AC103326-00062119D6A3A329-0-0] [lt=14][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:40.141930] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20320][T1_L0_G10][T1][YB42AC103326-00062119ED62FC89-0-0] [lt=20][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:40.158576] INFO [SQL.EXE] run2 (ob_maintain_dependency_info_task.cpp:227) [19986][MaintainDepInfo][T0][Y0-0000000000000000-0-0] [lt=15] [ASYNC TASK QUEUE](queue_.size()=0, sys_view_consistent_.size()=0) [2024-09-13 13:02:40.163845] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.163888] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=42][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.163906] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4023] 
generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760163823) [2024-09-13 13:02:40.177826] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.178466] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.178510] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=42][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.178531] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.178551] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.178575] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760178574, replica_locations:[]}) [2024-09-13 13:02:40.178602] INFO 
[SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.178632] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:59, local_retry_times:59, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:40.178659] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.178674] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.178695] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:40.178715] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:40.178728] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:40.178751] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM 
__all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:40.178770] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.178835] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568055948, cache_obj->added_lc()=false, cache_obj->get_object_id()=943, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.180117] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.180172] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=53][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:40.180306] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.180562] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] 
fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.180591] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.180609] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.180628] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.180649] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760180647, replica_locations:[]}) [2024-09-13 13:02:40.180686] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=34][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.180704] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4721] renew location failed(ret=-4721, 
ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:40.180720] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.180742] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:40.180755] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:40.180770] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:40.180792] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:40.180810] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4721] Failed to 
calculate table location(ret=-4721) [2024-09-13 13:02:40.180824] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:40.180838] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:40.180851] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:40.180863] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:40.180900] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=34][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:40.180921] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:40.180938] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:40.180950] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=11][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:40.180962] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:40.180974] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:40.180988] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:40.181008] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:40.181023] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:40.181036] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:40.181050] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:40.181065] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:40.181078] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] 
[lt=12][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=60, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:40.181105] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17] will sleep(sleep_us=60000, remain_us=95724, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=60, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:40.207496] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20288][T1_L0_G0][T1][YB42AC103326-00062119D9A3A431-0-0] [lt=22][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:40.209738] INFO [PALF] try_recycle_blocks (palf_env_impl.cpp:788) [20120][T1_PalfGC][T1][Y0-0000000000000000-0-0] [lt=25] LOG_DISK_OPTION(disk_options_wrapper_={disk_opts_for_stopping_writing:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, disk_opts_for_recycling_blocks:{log_disk_size(MB):0, log_disk_utilization_threshold(%):80, log_disk_utilization_limit_threshold(%):95, log_disk_throttling_percentage(%):60, log_disk_throttling_maximum_duration(s):7200, log_writer_parallelism:3}, status:1, cur_unrecyclable_log_disk_size(MB):0, sequence:1}) [2024-09-13 13:02:40.219855] INFO eloop_run (eloop.c:144) [19930][pnio1][T0][Y0-0000000000000000-0-0] [lt=25] PNIO [ratelimit] time: 1726203760219851, bytes: 5021493, bw: 0.058785 MB/s, add_ts: 1007630, add_bytes: 62111 [2024-09-13 13:02:40.220578] INFO [STORAGE] gc_tables_in_queue (ob_tenant_meta_mem_mgr.cpp:565) 
[20207][T1_TenantMetaMe][T1][Y0-0000000000000000-0-0] [lt=36] gc tables in queue: recycle 0 table(ret=0, tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl3788EEE", sizeof(T):3824, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, large_tablet_buffer_pool_={typeid(T).name():"N9oceanbase7storage15ObMetaObjBufferINS0_8ObTabletELl65448EEE", sizeof(T):65480, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, full_tablet_creator_={tiny_allocator_.used():0, tiny_allocator_.total():0, full allocator used:0, full allocator total:0}, tablets_mem=0, tablets_mem_limit=644245080, ddl_kv_pool_={typeid(T).name():"N9oceanbase7storage7ObDDLKVE", sizeof(T):2368, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, memtable_pool_={typeid(T).name():"N9oceanbase8memtable10ObMemtableE", sizeof(T):1984, used_obj_cnt:0, free_obj_hold_cnt:0, allocator used:0, allocator total:0}, pending_cnt=0, wait_gc_count=0, tablet count=0) [2024-09-13 13:02:40.220929] INFO [COORDINATOR] detect_recover (ob_failure_detector.cpp:142) [20111][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=33] doing detect recover operation(events_with_ops=[{event:{type:SCHEMA NOT REFRESHED, module:SCHEMA, info:schema not refreshed, level:SERIOUS}}]) [2024-09-13 13:02:40.224107] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9216782EE-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.228910] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=15] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:40.229061] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=22] sql audit evict task end(request_manager_->get_tenant_id()=1, 
evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:40.229825] INFO [ARCHIVE] gc_stale_ls_task_ (ob_ls_mgr.cpp:539) [20253][T1_LSArchiveMgr][T1][YB42AC103323-000621F920C60C7D-0-0] [lt=16] gc stale ls task succ [2024-09-13 13:02:40.230251] INFO [STORAGE] runTimerTask (ob_checkpoint_service.cpp:346) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=8] ====== check clog disk timer task ====== [2024-09-13 13:02:40.230304] INFO [PALF] get_disk_usage (palf_env_impl.cpp:891) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=51] get_disk_usage(ret=0, capacity(MB):=0, used(MB):=0) [2024-09-13 13:02:40.230321] INFO [STORAGE] cannot_recycle_log_over_threshold_ (ob_checkpoint_service.cpp:260) [20250][T1_CKClogDisk][T1][Y0-0000000000000000-0-0] [lt=9] cannot_recycle_log_size statistics(cannot_recycle_log_size=0, threshold=0, need_update_checkpoint_scn=false) [2024-09-13 13:02:40.235167] INFO [STORAGE] scheduler_ls_ha_handler_ (ob_storage_ha_service.cpp:196) [20263][T1_HAService][T1][Y0-0000000000000000-0-0] [lt=20] start do ls ha handler(ls_id_array_=[]) [2024-09-13 13:02:40.239837] WDIAG load_file_to_string (utility.h:662) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=4][errcode=0] read /sys/class/net/eth0/speed failed, errno 22 [2024-09-13 13:02:40.239859] WDIAG get_ethernet_speed (utility.cpp:580) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=18][errcode=-4000] load file /sys/class/net/eth0/speed failed, ret -4000 [2024-09-13 13:02:40.239867] WDIAG [SERVER] get_network_speed_from_sysfs (ob_server.cpp:2807) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=6][errcode=-4000] cannot get Ethernet speed, use default(tmp_ret=0, devname="eth0") [2024-09-13 13:02:40.239899] WDIAG [SERVER] runTimerTask (ob_server.cpp:3341) [19878][ServerGTimer][T0][Y0-0000000000000000-0-0] [lt=30][errcode=-4000] ObRefreshNetworkSpeedTask reload bandwidth throttle limit 
failed(ret=-4000, ret="OB_ERROR") [2024-09-13 13:02:40.241429] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.241805] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.241839] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=33][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.241856] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.241889] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.241919] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760241917, replica_locations:[]}) [2024-09-13 13:02:40.241944] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.241973] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:60, local_retry_times:60, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:40.242000] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.242016] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.242034] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:40.242048] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:40.242061] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:40.242093] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, 
column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:40.242117] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=23][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.242181] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568119294, cache_obj->added_lc()=false, cache_obj->get_object_id()=944, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.243392] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.243457] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=63][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:40.243603] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C7F-0-0] [lt=37][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.243857] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", 
*this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.243904] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=45][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.243920] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.243938] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.243957] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760243956, replica_locations:[]}) [2024-09-13 13:02:40.243980] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.243998] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, 
ls_id={id:1}) [2024-09-13 13:02:40.244020] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.244042] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:40.244056] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:40.244070] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:40.244091] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:40.244107] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:40.244122] WDIAG 
[SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:40.244136] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:40.244148] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:40.244160] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:40.244192] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=29][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:40.244207] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:40.244220] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:40.244232] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 
13:02:40.244243] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:40.244278] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=34][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:40.244292] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:40.244313] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:40.244328] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:40.244342] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:40.244355] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] fail to handle text query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:40.244370] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:40.244383] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] execute failed(ret=-4721, tenant_id=1, 
executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=61, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:40.244426] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17] will sleep(sleep_us=32403, remain_us=32403, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=61, v.err_=-4721, timeout_timestamp=1726203760276828) [2024-09-13 13:02:40.252334] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.252751] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.253518] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.253929] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.254275] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103324-000621F9212782D9-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.263902] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=14][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, 
tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:40.263932] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=28][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760263893) [2024-09-13 13:02:40.263943] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203760063856, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:40.263943] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AED-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760263396) [2024-09-13 13:02:40.263968] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.263975] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.263979] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760263951) [2024-09-13 13:02:40.263965] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AED-0-0] [lt=21][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203760263396}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:40.263992] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.263996] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.264000] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760263988) 
[2024-09-13 13:02:40.273290] WDIAG [STORAGE] get_ls (ob_ls_service.cpp:1062) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=7][errcode=-4719] get log stream fail(ret=-4719, ls_id={id:1}) [2024-09-13 13:02:40.273482] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921760C92-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.273812] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.273830] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.273837] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.273846] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.273891] WDIAG [SQL] create_sessid (ob_sql_session_mgr.cpp:409) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=8][errcode=0] server is initiating(server_id=0, local_seq=63, max_local_seq=262143, max_server_id=4095) [2024-09-13 13:02:40.274852] INFO [SQL.RESV] check_table_exist_or_not (ob_dml_resolver.cpp:7588) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=12] table not exist(tenant_id=1, database_id=201001, 
table_name=__all_ls_meta_table, table_name.ptr()="data_size:19, data:5F5F616C6C5F6C735F6D6574615F7461626C65", ret=-5019) [2024-09-13 13:02:40.274888] WDIAG [SQL.RESV] resolve_table_relation_recursively (ob_dml_resolver.cpp:7546) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=34][errcode=-5019] synonym not exist(tenant_id=1, database_id=201001, table_name=__all_ls_meta_table, ret=-5019) [2024-09-13 13:02:40.274901] WDIAG [SQL.RESV] resolve_table_relation_factor_normal (ob_dml_resolver.cpp:7376) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=11][errcode=-5019] fail to resolve table relation recursively(tenant_id=1, ret=-5019, database_id=201001, database_id=201001, table_name=__all_ls_meta_table, db_name=oceanbase) [2024-09-13 13:02:40.274911] WDIAG [SQL.RESV] resolve_table_relation_factor (ob_dml_resolver.cpp:7219) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=9][errcode=-5019] resolve table relation factor failed(ret=-5019, table_name=__all_ls_meta_table) [2024-09-13 13:02:40.274918] WDIAG [SQL.RESV] inner_resolve_sys_view (ob_dml_resolver.cpp:2165) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=5][errcode=-5019] fail to resolve table(ret=-5019) [2024-09-13 13:02:40.274926] WDIAG [SQL.RESV] resolve_table_relation_factor_wrapper (ob_dml_resolver.cpp:2220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=7][errcode=-5019] fail to resolve sys view(ret=-5019) [2024-09-13 13:02:40.274932] WDIAG resolve_basic_table_without_cte (ob_dml_resolver.cpp:2316) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=4][errcode=-5019] Table 'oceanbase.__all_ls_meta_table' doesn't exist [2024-09-13 13:02:40.274940] WDIAG [SQL.RESV] resolve_basic_table_with_cte (ob_dml_resolver.cpp:13351) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=6][errcode=-5019] resolve base or alias table factor failed(ret=-5019) [2024-09-13 13:02:40.274944] WDIAG [SQL.RESV] resolve_basic_table 
(ob_dml_resolver.cpp:13279) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=3][errcode=-5019] fail to resolve basic table with cte(ret=-5019) [2024-09-13 13:02:40.274948] WDIAG [SQL.RESV] resolve_table (ob_dml_resolver.cpp:2736) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=3][errcode=-5019] resolve basic table failed(ret=-5019) [2024-09-13 13:02:40.274953] WDIAG [SQL.RESV] resolve_from_clause (ob_select_resolver.cpp:3698) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=5][errcode=-5019] fail to exec resolve_table(*table_node, table_item)(ret=-5019) [2024-09-13 13:02:40.274959] WDIAG [SQL.RESV] resolve_normal_query (ob_select_resolver.cpp:1110) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=6][errcode=-5019] fail to exec resolve_from_clause(parse_tree.children_[PARSE_SELECT_FROM])(ret=-5019) [2024-09-13 13:02:40.274964] WDIAG [SQL.RESV] resolve (ob_select_resolver.cpp:1314) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=4][errcode=-5019] resolve normal query failed(ret=-5019) [2024-09-13 13:02:40.274967] WDIAG [SQL.RESV] select_stmt_resolver_func (ob_resolver.cpp:188) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=3][errcode=-5019] execute stmt_resolver failed(ret=-5019, parse_tree.type_=3299) [2024-09-13 13:02:40.274980] WDIAG [SQL] generate_stmt (ob_sql.cpp:3033) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=7][errcode=-5019] failed to resolve(ret=-5019) [2024-09-13 13:02:40.274987] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3154) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=6][errcode=-5019] Failed to generate stmt(ret=-5019, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:40.274994] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=5][errcode=-5019] Failed to generate plan(ret=-5019, result.get_exec_context().need_disconnect()=false) 
[2024-09-13 13:02:40.275001] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=6][errcode=-5019] fail to handle physical plan(ret=-5019) [2024-09-13 13:02:40.275006] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=4][errcode=-5019] fail to handle text query(stmt=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port, ret=-5019) [2024-09-13 13:02:40.275014] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=6][errcode=-5019] executor execute failed(ret=-5019) [2024-09-13 13:02:40.275018] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=4][errcode=-5019] execute failed(ret=-5019, tenant_id=1, executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, retry_cnt=0, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:40.275032] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=10][errcode=-5019] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:0, local_retry_times:0, err_:-5019, err_:"OB_TABLE_NOT_EXIST", retry_type:0, client_ret:-5019}, need_retry=false) [2024-09-13 13:02:40.275048] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=12][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:40.275052] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=4][errcode=-5019] result set close failed(ret=-5019) [2024-09-13 13:02:40.275056] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=3][errcode=-5019] failed to close result(close_ret=-5019, ret=-5019) [2024-09-13 13:02:40.275066] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=3][errcode=-5019] failed to process record(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, record_ret=-5019, ret=-5019) [2024-09-13 13:02:40.275075] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=7][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.275079] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=4][errcode=-5019] failed to process final(executor={ObIExecutor:, sql:"SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port"}, aret=-5019, ret=-5019) [2024-09-13 13:02:40.275084] WDIAG [SERVER] execute_read_inner (ob_inner_sql_connection.cpp:1786) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=5][errcode=-5019] execute sql failed(ret=-5019, tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:40.275090] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=6][errcode=-5019] retry_while_no_tenant_resource failed(ret=-5019, tenant_id=1) [2024-09-13 13:02:40.275095] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=4][errcode=-5019] execute_read failed(ret=-5019, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:40.275103] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:131) 
[20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=7][errcode=-5019] query failed(ret=-5019, conn=0x2b07a13e03a0, start=1726203760274701, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:40.275111] WDIAG [COMMON.MYSQLP] read (ob_mysql_proxy.cpp:66) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=8][errcode=-5019] read failed(ret=-5019) [2024-09-13 13:02:40.275116] WDIAG [SHARE.PT] get_by_tenant (ob_persistent_ls_table.cpp:639) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=3][errcode=-5019] execute sql failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, sql=SELECT * FROM __all_ls_meta_table WHERE tenant_id = 1 ORDER BY tenant_id, ls_id, svr_ip, svr_port) [2024-09-13 13:02:40.275171] WDIAG [SHARE.PT] get_by_tenant (ob_ls_table_operator.cpp:252) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=8][errcode=-5019] get all ls info by persistent_ls_ failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1) [2024-09-13 13:02:40.275181] WDIAG [SHARE] inner_open_ (ob_ls_table_iterator.cpp:104) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=10][errcode=-5019] fail to get ls infos by tenant(ret=-5019, ret="OB_TABLE_NOT_EXIST", tenant_id=1, inner_table_only=true) [2024-09-13 13:02:40.275187] WDIAG [SHARE] next (ob_ls_table_iterator.cpp:71) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=5][errcode=-5019] fail to open iterator(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:40.275192] WDIAG [SERVER] build_replica_map_ (ob_tenant_meta_checker.cpp:334) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=4][errcode=-5019] ls table iterator next failed(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:40.275197] WDIAG [SERVER] check_ls_table_ (ob_tenant_meta_checker.cpp:214) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=4][errcode=-5019] build replica map from ls table 
failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:40.275205] WDIAG [SERVER] check_ls_table (ob_tenant_meta_checker.cpp:194) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=7][errcode=-5019] check ls table failed(ret=-5019, ret="OB_TABLE_NOT_EXIST", mode=1) [2024-09-13 13:02:40.275209] WDIAG [SERVER] runTimerTask (ob_tenant_meta_checker.cpp:44) [20235][T1_LSMetaCh][T1][YB42AC103323-000621F921760C92-0-0] [lt=4][errcode=-5019] fail to check ls meta table(ret=-5019, ret="OB_TABLE_NOT_EXIST") [2024-09-13 13:02:40.276961] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=37][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203760276830, ctx_timeout_ts=1726203760276830, worker_timeout_ts=1726203760276828, default_timeout=1000000) [2024-09-13 13:02:40.277009] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=46][errcode=-4012] fail to set default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:40.277026] WDIAG [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:429) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4012] batch renew cache failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, tablet_list=[{id:1}], ls_ids=[{id:1}]) [2024-09-13 13:02:40.277048] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.277071] WDIAG [SQL.DAS] force_refresh_location_cache (ob_das_location_router.cpp:1212) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4012] batch 
renew tablet locations failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, err_no=-4721, is_nonblock=false, failed_list=[{id:1}]) [2024-09-13 13:02:40.277090] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4721] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:61, local_retry_times:61, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:1, client_ret:-4721}, need_retry=true) [2024-09-13 13:02:40.277113] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.277129] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.277146] WDIAG [SERVER] inner_close (ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:40.277159] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:40.277186] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=27][errcode=-4721] failed to close result(close_ret=-4721, ret=-4721) [2024-09-13 13:02:40.277209] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4721, ret=-4721) [2024-09-13 13:02:40.277227] WDIAG [SQL] move_to_sqlstat_cache 
(ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.277284] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568154398, cache_obj->added_lc()=false, cache_obj->get_object_id()=945, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.278534] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=18][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.278584] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=50][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:40.278605] WDIAG [SHARE] set_default_timeout_ctx (ob_share_util.cpp:153) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4012] timeouted(ret=-4012, ret="OB_TIMEOUT", abs_timeout_ts=1726203760276828, ctx_timeout_ts=1726203760276828, worker_timeout_ts=1726203760276828, default_timeout=1000000) [2024-09-13 13:02:40.278622] WDIAG [SHARE.LOCATION] batch_renew_ls_locations (ob_ls_location_service.cpp:1236) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4012] fail to set 
default_timeout_ctx(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:40.278638] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1005) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4012] batch renew ls locations failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_ids=[{id:1}]) [2024-09-13 13:02:40.278655] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4012] renew location failed(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:40.278670] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4012] fail to get log stream location(ret=-4012, ret="OB_TIMEOUT", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.278690] WDIAG [SQL.DAS] block_renew_tablet_location (ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4012] failed to get location(ls_id={id:1}, ret=-4012) [2024-09-13 13:02:40.278704] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] block renew tablet location failed(tmp_ret=-4012, tmp_ret="OB_TIMEOUT", tablet_id={id:1}) [2024-09-13 13:02:40.278719] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, 
priority_replica_idxs:[]}) [2024-09-13 13:02:40.278739] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=20][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:40.278756] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:40.278778] WDIAG [SQL.JO] compute_table_location (ob_join_order.cpp:299) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4721] failed to calculate table location(ret=-4721) [2024-09-13 13:02:40.278791] WDIAG [SQL.JO] compute_base_table_property (ob_join_order.cpp:7903) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to calc table location(ret=-4721) [2024-09-13 13:02:40.278807] WDIAG [SQL.JO] generate_base_table_paths (ob_join_order.cpp:7850) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4721] failed to compute base path property(ret=-4721) [2024-09-13 13:02:40.278830] WDIAG [SQL.JO] generate_normal_base_table_paths (ob_join_order.cpp:7836) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=22][errcode=-4721] failed to generate access paths(ret=-4721) [2024-09-13 13:02:40.278845] WDIAG [SQL.OPT] generate_plan_tree (ob_log_plan.cpp:7710) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] failed to generate the access path for the single-table query(ret=-4721, get_optimizer_context().get_query_ctx()->get_sql_stmt()=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:40.278860] WDIAG [SQL.OPT] generate_raw_plan_for_plain_select (ob_select_log_plan.cpp:4334) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4721] failed to generate plan tree for plain select(ret=-4721) [2024-09-13 13:02:40.278872] WDIAG [SQL.OPT] generate_raw_plan (ob_log_plan.cpp:11977) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] fail to generate normal raw plan(ret=-4721) [2024-09-13 13:02:40.278900] WDIAG [SQL.OPT] generate_plan (ob_log_plan.cpp:11935) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=27][errcode=-4721] fail to generate raw plan(ret=-4721) [2024-09-13 13:02:40.278912] WDIAG [SQL.OPT] optimize (ob_optimizer.cpp:64) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to perform optimization(ret=-4721) [2024-09-13 13:02:40.278925] WDIAG [SQL] optimize_stmt (ob_sql.cpp:3764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] Failed to optimize logical plan(ret=-4721) [2024-09-13 13:02:40.278938] WDIAG [SQL] generate_plan (ob_sql.cpp:3399) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] Failed to optimizer stmt(ret=-4721) [2024-09-13 13:02:40.278959] WDIAG [SQL] generate_physical_plan (ob_sql.cpp:3188) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] failed to generate plan(ret=-4721) [2024-09-13 13:02:40.278974] WDIAG [SQL] handle_physical_plan (ob_sql.cpp:5029) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] Failed to generate plan(ret=-4721, result.get_exec_context().need_disconnect()=false) [2024-09-13 13:02:40.278987] WDIAG [SQL] handle_text_query (ob_sql.cpp:2725) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] fail to handle physical plan(ret=-4721) [2024-09-13 13:02:40.279001] WDIAG [SQL] stmt_query (ob_sql.cpp:229) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] fail to handle text 
query(stmt=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name, ret=-4721) [2024-09-13 13:02:40.279016] WDIAG [SERVER] do_query (ob_inner_sql_connection.cpp:764) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4721] executor execute failed(ret=-4721) [2024-09-13 13:02:40.279028] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4721] execute failed(ret=-4721, tenant_id=1, executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, retry_cnt=62, local_sys_schema_version=1, local_tenant_schema_version=1) [2024-09-13 13:02:40.279047] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4012] [RETRY] check if need retry(v={force_local_retry:true, stmt_retry_times:62, local_retry_times:62, err_:-4721, err_:"OB_LS_LOCATION_NOT_EXIST", retry_type:0, client_ret:-4012}, need_retry=false) [2024-09-13 13:02:40.279069] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.279082] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.279121] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=true, tablet_list=[{id:1}], ls_ids=[], error_code=-4721) [2024-09-13 13:02:40.279145] WDIAG [SERVER] inner_close 
(ob_inner_sql_result.cpp:220) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=21][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:40.279158] WDIAG [SERVER] force_close (ob_inner_sql_result.cpp:200) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4721] result set close failed(ret=-4721) [2024-09-13 13:02:40.279170] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:927) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=11][errcode=-4012] failed to close result(close_ret=-4721, ret=-4012) [2024-09-13 13:02:40.279196] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:957) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4012] failed to process record(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, record_ret=-4012, ret=-4012) [2024-09-13 13:02:40.279213] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.279228] INFO [SERVER] process_final (ob_inner_sql_connection.cpp:743) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=12] slow inner sql(last_ret=-4012, sql={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, process_time=2001652) [2024-09-13 13:02:40.279243] WDIAG [SERVER] query (ob_inner_sql_connection.cpp:993) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4012] failed to process final(executor={ObIExecutor:, sql:"SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name"}, aret=-4012, ret=-4012) [2024-09-13 13:02:40.279258] WDIAG [SERVER] 
execute_read_inner (ob_inner_sql_connection.cpp:1786) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4012] execute sql failed(ret=-4012, tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:40.279272] WDIAG [SERVER] retry_while_no_tenant_resource (ob_inner_sql_connection.cpp:1054) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4012] retry_while_no_tenant_resource failed(ret=-4012, tenant_id=1) [2024-09-13 13:02:40.279285] WDIAG [SERVER] execute_read (ob_inner_sql_connection.cpp:1726) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4012] execute_read failed(ret=-4012, cluster_id=1726203323, tenant_id=1) [2024-09-13 13:02:40.279299] WDIAG [COMMON.MYSQLP] read_without_check_sys_variable (ob_sql_client_decorator.cpp:119) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4012] failed to read without check sys variable(ret=-4012, sql="SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name", tenant_id=1, check_sys_variable=false, snapshot_timestamp=-1) [2024-09-13 13:02:40.279322] WDIAG [SHARE] load (ob_core_table_proxy.cpp:436) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=19][errcode=-4012] execute sql failed(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, sql=SELECT row_id, column_name, column_value FROM __all_core_table WHERE table_name = '__all_global_stat' ORDER BY row_id, column_name) [2024-09-13 13:02:40.279371] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568156489, cache_obj->added_lc()=false, cache_obj->get_object_id()=946, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 
0x24dd7500 0x24f2d5e3 0x24f2d4fa 0x24f2d4af 0x2517b731 0x25186ba2 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.279458] WDIAG [SHARE] load (ob_core_table_proxy.cpp:368) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=17][errcode=-4012] load failed(ret=-4012, for_update=false) [2024-09-13 13:02:40.279475] WDIAG [SHARE] get (ob_global_stat_proxy.cpp:442) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=16][errcode=-4012] core_table load failed(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:40.279489] WDIAG [SHARE] get_baseline_schema_version (ob_global_stat_proxy.cpp:406) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4012] get failed(ret=-4012) [2024-09-13 13:02:40.279502] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_schema_service_sql_impl.cpp:884) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=12][errcode=-4012] get_baseline_schema_version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:40.279519] WDIAG [SHARE.SCHEMA] get_baseline_schema_version (ob_multi_version_schema_service.cpp:4601) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4012] get baseline schema version failed(ret=-4012, ret="OB_TIMEOUT", schema_status={tenant_id:1, snapshot_timestamp:-1, readable_schema_version:-1}) [2024-09-13 13:02:40.279534] WDIAG [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2861) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=-4012] fail to get baseline schema version(ret=-4012, ret="OB_TIMEOUT", tenant_id=1) [2024-09-13 13:02:40.279547] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2900) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=12] 
[REFRESH_SCHEMA] end refresh and add schema by tenant(ret=-4012, ret="OB_TIMEOUT", tenant_id=1, cost=2002720) [2024-09-13 13:02:40.279562] WDIAG [SHARE.SCHEMA] operator() (ob_multi_version_schema_service.cpp:2565) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=14][errcode=0] fail to refresh tenant schema(tmp_ret=-4012, tenant_id=1) [2024-09-13 13:02:40.279576] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2583) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=13] [REFRESH_SCHEMA] end refresh and add schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1], cost=2002754) [2024-09-13 13:02:40.279592] WDIAG [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:402) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=15][errcode=-4012] fail to refresh schema(ret=-4012, ret="OB_TIMEOUT", tenant_ids=[1]) [2024-09-13 13:02:40.279619] INFO [SERVER] process_async_refresh_tasks (ob_server_schema_updater.cpp:406) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=27] try to async refresh schema(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:40.279632] WDIAG [SERVER] batch_process_tasks (ob_server_schema_updater.cpp:238) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C7F-0-0] [lt=13][errcode=-4012] fail to process async refresh tasks(ret=-4012, ret="OB_TIMEOUT") [2024-09-13 13:02:40.279651] WDIAG [SERVER] batch_process_tasks (ob_uniq_task_queue.h:505) [19945][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=16][errcode=-4012] fail to batch process task(ret=-4012) [2024-09-13 13:02:40.279665] WDIAG [SERVER] run1 (ob_uniq_task_queue.h:456) [19945][SerScheQueue1][T0][Y0-0000000000000000-0-0] [lt=13][errcode=-4012] fail to batch execute task(ret=-4012, tasks.count()=1) [2024-09-13 13:02:40.279695] INFO [SHARE.SCHEMA] refresh_and_add_schema (ob_multi_version_schema_service.cpp:2476) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C80-0-0] [lt=12] [REFRESH_SCHEMA] 
start to refresh and add schema(tenant_ids=[1]) [2024-09-13 13:02:40.279711] INFO [SHARE.SCHEMA] refresh_tenant_schema (ob_multi_version_schema_service.cpp:2806) [19945][SerScheQueue1][T0][YB42AC103323-000621F922060C80-0-0] [lt=13] [REFRESH_SCHEMA] start to refresh and add schema by tenant(tenant_id=1) [2024-09-13 13:02:40.282009] WDIAG [SHARE.LOCATION] nonblock_get (ob_location_service.cpp:129) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12][errcode=-4721] fail to nonblock get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.282059] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:865) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=49][errcode=-4721] fail to get tablet locations(ret=-4721, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:40.282248] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.282538] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.282570] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=31][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.282588] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17] leader doesn't exist, try use all_server_list(tmp_ret=-4018, 
tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.282607] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.282629] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760282627, replica_locations:[]}) [2024-09-13 13:02:40.282653] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4721] get empty location from meta table(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.282672] WDIAG [SHARE.LOCATION] get (ob_ls_location_service.cpp:289) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=18][errcode=-4721] renew location failed(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}) [2024-09-13 13:02:40.282689] WDIAG [SHARE.LOCATION] get (ob_location_service.cpp:58) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4721] fail to get log stream location(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", cluster_id=1726203323, tenant_id=1, ls_id={id:1}, expire_renew_time=9223372036854775807, is_cache_hit=false, location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}) [2024-09-13 13:02:40.282721] WDIAG [SQL.DAS] block_renew_tablet_location 
(ob_das_location_router.cpp:1247) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=31][errcode=-4721] failed to get location(ls_id={id:1}, ret=-4721) [2024-09-13 13:02:40.282736] WDIAG [SQL.DAS] nonblock_get (ob_das_location_router.cpp:877) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=14][errcode=-4721] block renew tablet location failed(tmp_ret=-4721, tmp_ret="OB_LS_LOCATION_NOT_EXIST", tablet_id={id:1}) [2024-09-13 13:02:40.282751] WDIAG [SQL.DAS] nonblock_get_candi_tablet_locations (ob_das_location_router.cpp:910) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=14][errcode=-4721] Get partition error, the location cache will be renewed later(ret=-4721, tablet_id={id:1}, candi_tablet_loc={opt_tablet_loc:{partition_id:-1, tablet_id:{id:0}, ls_id:{id:-1}, replica_locations:[]}, selected_replica_idx:-1, priority_replica_idxs:[]}) [2024-09-13 13:02:40.282772] WDIAG [SQL.OPT] calculate_candi_tablet_locations (ob_table_location.cpp:1458) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4721] Failed to set partition locations(ret=-4721, partition_ids=[1], tablet_ids=[{id:1}]) [2024-09-13 13:02:40.282790] WDIAG [SQL.OPT] calculate_phy_table_location_info (ob_table_partition_info.cpp:96) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4721] Failed to calculate table location(ret=-4721) [2024-09-13 13:02:40.282842] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:104) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] already timeout, do not need sleep(sleep_us=0, remain_us=1996883, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=0, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.282993] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.283209] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.283236] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.283252] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.283268] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.283287] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760283286, replica_locations:[]}) [2024-09-13 13:02:40.283309] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.283343] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.283357] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.283385] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.283429] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568160546, cache_obj->added_lc()=false, cache_obj->get_object_id()=947, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.283679] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.283865] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.283890] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.283897] INFO 
[SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.283904] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.283913] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.283922] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:40.283931] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:40.283937] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:169) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5][errcode=-4638] renew_master_rootserver failed(tmp_ret=-4638) [2024-09-13 13:02:40.284015] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.284198] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.284209] WDIAG 
[SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.284214] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.284220] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.284226] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760284226, replica_locations:[]}) [2024-09-13 13:02:40.284239] WDIAG [SHARE.LOCATION] renew_location_ (ob_ls_location_service.cpp:1008) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=12][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:40.284251] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:171) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=0][errcode=-4638] fail to refresh core partition(tmp_ret=-4721) [2024-09-13 13:02:40.284419] WDIAG [RPC] send (ob_poc_rpc_proxy.h:170) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] execute rpc fail(addr="172.16.51.35:2882", pcode=258, ret=-4638, timeout=2000000) [2024-09-13 13:02:40.284429] WDIAG log_user_error_and_warn (ob_poc_rpc_proxy.cpp:246) 
[19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4638] [2024-09-13 13:02:40.284546] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.284806] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.284816] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.284821] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.284826] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.284834] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.284842] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:40.284856] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) 
[19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=13][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:40.284860] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=0) [2024-09-13 13:02:40.284961] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.285081] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.285120] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.285130] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.285134] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.285142] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.285146] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) 
[19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.285152] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:40.285159] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] failed to renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:40.285163] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=1) [2024-09-13 13:02:40.285215] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F921360C7D-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.285272] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.285302] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.285316] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 
13:02:40.285336] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=18] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.285391] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=3][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.285400] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=9][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.285405] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.285412] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.285416] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:355) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] no leader finded(ret=-4638, ret="OB_RS_NOT_MASTER", leader_exist=false, ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.285423] INFO [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:366) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7] [RS_MGR] new master rootserver found(rootservice="0.0.0.0:0", cluster_id=1726203323) [2024-09-13 13:02:40.285427] WDIAG [SHARE] renew_master_rootserver (ob_rs_mgr.cpp:311) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=4][errcode=-4638] failed to 
renew master rootserver(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:40.285433] WDIAG [SHARE] rpc_call (ob_common_rpc_proxy.h:419) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=6][errcode=-4638] renew_master_rootserver failed(ret=-4638, retry=2) [2024-09-13 13:02:40.285450] WDIAG [SERVER] do_renew_lease (ob_lease_state_mgr.cpp:415) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=15][errcode=-4638] can't get lease from rs(rs_addr="172.16.51.35:2882", ret=-4638) [2024-09-13 13:02:40.285458] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:160) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] fail to do_renew_lease(ret=-4638, ret="OB_RS_NOT_MASTER") [2024-09-13 13:02:40.285465] WDIAG [SERVER] register_self_busy_wait (ob_lease_state_mgr.cpp:165) [19877][observer][T0][YB42AC103323-000621F921360C7D-0-0] [lt=7][errcode=-4638] register failed, will try again(ret=-4638, ret="OB_RS_NOT_MASTER", retry latency=2) [2024-09-13 13:02:40.285394] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760285393, replica_locations:[]}) [2024-09-13 13:02:40.285527] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1] will sleep(sleep_us=1000, remain_us=1994198, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=1, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.286752] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.286969] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.287003] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=32][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.287019] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.287036] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.287056] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760287055, replica_locations:[]}) [2024-09-13 13:02:40.287078] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 
13:02:40.287107] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.287120] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.287153] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.287197] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568164313, cache_obj->added_lc()=false, cache_obj->get_object_id()=948, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.288268] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.288515] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.288546] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, 
ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.288563] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.288585] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.288604] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760288603, replica_locations:[]}) [2024-09-13 13:02:40.288663] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=2000, remain_us=1991063, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=2, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.290904] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.291126] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.291155] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.291171] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.291188] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.291207] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760291206, replica_locations:[]}) [2024-09-13 13:02:40.291229] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=21] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.291258] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.291273] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 
13:02:40.291300] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.291347] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568168462, cache_obj->added_lc()=false, cache_obj->get_object_id()=949, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.292480] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.292719] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.292746] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.292762] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.292779] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.292796] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760292796, replica_locations:[]}) [2024-09-13 13:02:40.292854] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=3000, remain_us=1986872, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=3, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.296076] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.296307] INFO eloop_run (eloop.c:144) [19933][pnio2][T0][Y0-0000000000000000-0-0] [lt=22] PNIO [ratelimit] time: 1726203760296304, bytes: 0, bw: 0.000000 MB/s, add_ts: 1007624, add_bytes: 0 [2024-09-13 13:02:40.296358] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.296382] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 
13:02:40.296400] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.296417] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.296455] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=32] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760296455, replica_locations:[]}) [2024-09-13 13:02:40.296478] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.296506] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.296520] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.296556] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") 
[2024-09-13 13:02:40.296605] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568173720, cache_obj->added_lc()=false, cache_obj->get_object_id()=950, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.297752] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.297985] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.298013] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.298030] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.298047] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.298065] INFO [SHARE.LOCATION] batch_update_caches_ 
(ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760298064, replica_locations:[]}) [2024-09-13 13:02:40.298122] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=4000, remain_us=1981603, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=4, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.302449] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.302812] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.302842] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.302849] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.302862] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] server_list is 
empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.302890] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760302888, replica_locations:[]}) [2024-09-13 13:02:40.302909] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.302936] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.302946] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.302972] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.303021] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568180137, cache_obj->added_lc()=false, cache_obj->get_object_id()=951, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 
0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.304207] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.304405] INFO [COMMON] replace_fragment_node (ob_kvcache_map.cpp:685) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=21] Cache replace map node details(ret=0, replace_node_count=0, replace_time=4067, replace_start_pos=692054, replace_num=62914) [2024-09-13 13:02:40.304435] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=28] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=10) [2024-09-13 13:02:40.304486] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.304510] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.304522] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.304535] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 
13:02:40.304550] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760304549, replica_locations:[]}) [2024-09-13 13:02:40.304608] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=5000, remain_us=1975118, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=5, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.309857] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.310288] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.310314] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.310322] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.310332] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.310348] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760310347, replica_locations:[]}) [2024-09-13 13:02:40.310370] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.310396] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.310407] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.310455] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.310508] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568187623, cache_obj->added_lc()=false, cache_obj->get_object_id()=952, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.311634] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.311927] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.311955] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.311963] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.311972] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.311983] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203760311982, replica_locations:[]}) [2024-09-13 13:02:40.312043] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1] will sleep(sleep_us=6000, remain_us=1967682, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=6, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.318306] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.318710] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.318737] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=26][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.318745] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.318755] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.318768] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760318767, replica_locations:[]}) [2024-09-13 13:02:40.318786] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.318808] WDIAG [SERVER] after_func (ob_query_retry_ctrl.cpp:968) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth] [2024-09-13 13:02:40.318829] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.318837] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.318860] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.318923] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568196037, cache_obj->added_lc()=false, cache_obj->get_object_id()=953, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") 
[2024-09-13 13:02:40.320076] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.320358] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.320383] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.320391] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.320400] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.320411] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760320410, replica_locations:[]}) [2024-09-13 13:02:40.320486] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=7000, 
remain_us=1959239, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=7, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.327735] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.328043] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.328074] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.328082] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.328093] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.328108] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760328107, replica_locations:[]}) [2024-09-13 13:02:40.328127] INFO [SHARE.LOCATION] 
batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.328155] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.328166] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.328199] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.328252] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568205366, cache_obj->added_lc()=false, cache_obj->get_object_id()=954, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.329538] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.329849] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.329889] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=39][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.329897] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.329907] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.329921] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760329920, replica_locations:[]})
[2024-09-13 13:02:40.329987] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=8000, remain_us=1949738, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=8, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.330322] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=31] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:40.333067] INFO pn_ratelimit (group.c:643) [20054][IngressService][T0][Y0-0000000000000000-0-0] [lt=17] PNIO set ratelimit as 9223372036854775807 bytes/s, grp_id=2
[2024-09-13 13:02:40.338216] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.338613] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.338642] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.338650] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.338660] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.338675] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760338674, replica_locations:[]})
[2024-09-13 13:02:40.338693] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.338722] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.338732] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.338756] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.338811] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568215925, cache_obj->added_lc()=false, cache_obj->get_object_id()=955, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.340110] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.340406] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.340430] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.340451] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.340462] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.340474] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760340473, replica_locations:[]})
[2024-09-13 13:02:40.340542] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=9000, remain_us=1939183, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=9, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.349248] WDIAG [SHARE.LOCATION] nonblock_get_leader (ob_ls_location_service.cpp:448) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=23][errcode=-4721] REACH SYSLOG RATE LIMIT [bandwidth]
[2024-09-13 13:02:40.349327] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=1] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]})
[2024-09-13 13:02:40.349345] INFO [STORAGE.TRANS] operator() (ob_ts_mgr.h:179) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] refresh gts functor(ret=-4721, ret="OB_LS_LOCATION_NOT_EXIST", gts_tenant_info={v:1})
[2024-09-13 13:02:40.349335] WDIAG [SHARE.LOCATION] batch_process_tasks (ob_ls_location_service.cpp:549) [19987][SysLocAsyncUp0][T0][YB42AC103323-000621F920B60CEF-0-0] [lt=18][errcode=0] tenant schema is not ready, need wait(ret=0, ret="OB_SUCCESS", superior_tenant_id=1, task={cluster_id:1726203323, tenant_id:1, ls_id:{id:1}, renew_for_tenant:false, add_timestamp:1726203760349290})
[2024-09-13 13:02:40.349810] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.349862] INFO [STORAGE.BLKMGR] runTimerTask (ob_block_manager.cpp:1573) [20005][BlkMgr][T0][Y0-0000000000000000-0-0] [lt=21] skip inspect bad block(last_check_time=1726203737347795, last_macro_idx=-1)
[2024-09-13 13:02:40.350156] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.350180] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.350187] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.350197] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.350217] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760350216, replica_locations:[]})
[2024-09-13 13:02:40.350234] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.350263] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.350274] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.350306] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.350364] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568227478, cache_obj->added_lc()=false, cache_obj->get_object_id()=956, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.351604] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.351999] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.352027] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=27][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.352035] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.352044] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.352056] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760352055, replica_locations:[]})
[2024-09-13 13:02:40.352120] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1] will sleep(sleep_us=10000, remain_us=1927605, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=10, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.362369] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.362660] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.362691] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.362700] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.362711] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.362727] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760362725, replica_locations:[]})
[2024-09-13 13:02:40.362763] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=35] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.362792] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.362802] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.362827] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.362894] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568239997, cache_obj->added_lc()=false, cache_obj->get_object_id()=957, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.364071] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:40.364100] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=27][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760364062)
[2024-09-13 13:02:40.364110] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203760263950, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:40.364131] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:40.364141] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:40.364146] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760364117)
[2024-09-13 13:02:40.364262] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=32][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.364580] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.364618] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.364626] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.364636] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.364651] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760364650, replica_locations:[]})
[2024-09-13 13:02:40.364718] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=11000, remain_us=1915007, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=11, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.372337] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5F-0-0] [lt=15] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0)
[2024-09-13 13:02:40.372368] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B5F-0-0] [lt=29][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203760371800], range_size:1, sender:"172.16.51.38:2882"})
[2024-09-13 13:02:40.372903] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEF-0-0] [lt=16][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:40.373450] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DEF-0-0] [lt=18][errcode=-8004] checking cluster ID failed(ret=-8004)
[2024-09-13 13:02:40.375955] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.379102] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.379132] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.379143] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.379156] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.379175] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760379173, replica_locations:[]})
[2024-09-13 13:02:40.379200] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.379236] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.379249] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.379278] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.379337] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568256453, cache_obj->added_lc()=false, cache_obj->get_object_id()=958, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.380419] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.380763] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.380785] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.380794] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.380810] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.380827] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760380825, replica_locations:[]})
[2024-09-13 13:02:40.380908] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=12000, remain_us=1898818, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=12, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.393164] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=46][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.393528] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.393553] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.393563] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.393580] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.393599] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760393598, replica_locations:[]})
[2024-09-13 13:02:40.393621] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.393653] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.393665] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.393705] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.393766] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568270879, cache_obj->added_lc()=false, cache_obj->get_object_id()=959, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.394884] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.395212] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.395233] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.395242] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.395257] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.395273] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760395272, replica_locations:[]})
[2024-09-13 13:02:40.395341] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1] will sleep(sleep_us=13000, remain_us=1884385, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=13, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.395891] INFO [STORAGE] operator() (ob_tenant_freezer.cpp:131) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=17] ====== tenant freeze timer task ======
[2024-09-13 13:02:40.395932] WDIAG [STORAGE] get_tenant_tx_data_mem_used_ (ob_tenant_freezer.cpp:614) [20142][T1_Occam][T1][Y0-0000000000000000-0-0] [lt=25][errcode=0] [TenantFreezer] no logstream(ret=0, ret="OB_SUCCESS", ls_cnt=0, tenant_info_={slow_freeze:false, slow_freeze_timestamp:0, freeze_interval:0, last_freeze_timestamp:0, slow_tablet:{id:0}})
[2024-09-13 13:02:40.408650] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.409160] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.409183] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.409190] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.409199] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.409211] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760409210, replica_locations:[]})
[2024-09-13 13:02:40.409224] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.409253] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.409262] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.409284] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.409334] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568286451, cache_obj->added_lc()=false, cache_obj->get_object_id()=960, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.410428] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.410683] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.410708] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.410717] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.410729] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.410741] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1},
cluster_id:1726203323}, renew_time:1726203760410740, replica_locations:[]}) [2024-09-13 13:02:40.410808] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=14000, remain_us=1868918, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=14, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.418904] INFO [SERVER] async_refresh_schema (ob_server_schema_updater.cpp:485) [20287][T1_L0_G0][T1][YB42AC103326-00062119ED45FECA-0-0] [lt=23] schedule async refresh schema task(ret=0, ret="OB_SUCCESS", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:40.422603] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20282][T1_L5_G0][T1][YB42AC103326-00062119ED82E19E-0-0] [lt=19][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:40.425073] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=34][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.425380] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.425403] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.425411] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.425419] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.425432] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760425431, replica_locations:[]}) [2024-09-13 13:02:40.425479] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=44] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.425502] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.425511] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.425540] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.425592] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6568302709, cache_obj->added_lc()=false, cache_obj->get_object_id()=961, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.426745] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.426998] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.427018] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.427024] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.427033] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.427043] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760427042, replica_locations:[]}) [2024-09-13 13:02:40.427096] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1] will sleep(sleep_us=15000, remain_us=1852630, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=15, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.437257] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20289][T1_L0_G0][T1][YB42AC103326-00062119EC6C651F-0-0] [lt=31][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:40.442370] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.442698] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.442721] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.442732] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.442746] INFO [SHARE.PT] get_ls_info_ 
(ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.442767] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760442765, replica_locations:[]}) [2024-09-13 13:02:40.442795] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=25] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.442825] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.442837] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.442889] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.442952] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568320064, cache_obj->added_lc()=false, cache_obj->get_object_id()=962, cache_obj->get_tenant_id()=1, 
lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.444183] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=24][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.444531] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.444567] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=35][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.444574] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.444583] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.444596] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, 
cluster_id:1726203323}, renew_time:1726203760444595, replica_locations:[]}) [2024-09-13 13:02:40.444658] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=16000, remain_us=1835068, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=16, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.458561] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F92169006B-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.460893] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.461152] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.461175] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.461184] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.461199] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.461219] INFO [SHARE.LOCATION] 
batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760461217, replica_locations:[]}) [2024-09-13 13:02:40.461241] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.461271] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.461283] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.461321] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.461384] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568338496, cache_obj->added_lc()=false, cache_obj->get_object_id()=963, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 
0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.462647] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.462909] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.462938] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.462949] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.462961] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.462981] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760462980, replica_locations:[]}) [2024-09-13 13:02:40.463060] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=17000, remain_us=1816665, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=17, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.464033] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AEE-0-0] [lt=31][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760463538) [2024-09-13 13:02:40.464074] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AEE-0-0] [lt=35][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203760463538}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:40.464104] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.464114] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.464122] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760464089) [2024-09-13 13:02:40.468830] INFO [LIB] log_compress_loop_ (ob_log_compressor.cpp:393) [19885][SyslogCompress][T0][Y0-0000000000000000-0-0] [lt=25] log compressor cycles once. (ret=0, cost_time=1072, compressed_file_count=0, deleted_file_count=0, disk_remaining_size=182289346560) [2024-09-13 13:02:40.480364] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.480661] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.480685] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.480695] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.480711] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=14] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.480731] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760480729, replica_locations:[]}) [2024-09-13 13:02:40.480754] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.480786] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.480798] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.480824] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.480902] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568358015, cache_obj->added_lc()=false, cache_obj->get_object_id()=964, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 
0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.480938] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.481307] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=14][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.482060] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=10][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.482080] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.482263] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.482283] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.482293] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 
13:02:40.482307] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.482310] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=4][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.482324] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760482323, replica_locations:[]})
[2024-09-13 13:02:40.482392] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1] will sleep(sleep_us=18000, remain_us=1797334, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=18, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.482542] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103325-000621F921290056-0-0] [lt=8][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.500695] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.500952] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.500982] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.500991] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.501003] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.501023] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760501022, replica_locations:[]})
[2024-09-13 13:02:40.501043] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.501073] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.501084] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.501122] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.501187] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568378300, cache_obj->added_lc()=false, cache_obj->get_object_id()=965, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.502597] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=29][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.502904] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.502934] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.502948] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.502962] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.502976] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760502975, replica_locations:[]})
[2024-09-13 13:02:40.503059] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=19000, remain_us=1776667, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=19, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.504528] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=9)
[2024-09-13 13:02:40.522301] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.522584] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.522614] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=29][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.522622] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.522637] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.522655] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760522654, replica_locations:[]})
[2024-09-13 13:02:40.522719] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=61] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.522746] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.522755] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.522777] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.522831] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568399945, cache_obj->added_lc()=false, cache_obj->get_object_id()=966, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.523881] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.524083] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.524102] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.524108] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.524115] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.524123] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760524123, replica_locations:[]})
[2024-09-13 13:02:40.524183] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=20000, remain_us=1755543, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=20, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.530651] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=29] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:40.544401] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.544671] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.544694] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.544701] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.544709] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.544720] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760544720, replica_locations:[]})
[2024-09-13 13:02:40.544735] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.544757] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.544765] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.544803] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.544850] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568421965, cache_obj->added_lc()=false, cache_obj->get_object_id()=967, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.545918] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.546092] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.546192] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=98][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.546204] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.546212] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.546222] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760546221, replica_locations:[]})
[2024-09-13 13:02:40.546268] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=21000, remain_us=1733458, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=21, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.564110] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AEF-0-0] [lt=26][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760563662)
[2024-09-13 13:02:40.564146] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AEF-0-0] [lt=30][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203760563662}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:40.564172] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:40.564212] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=24][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760564162)
[2024-09-13 13:02:40.564223] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203760364116, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:40.564236] INFO [STORAGE.TRANS] self_check (ob_tenant_weak_read_cluster_service.cpp:833) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9] [WRS] [TENANT_WEAK_READ_SERVICE] [CLUSTER_SERVICE] [SELF_CHECK] done(ret=0, ret="OB_SUCCESS", tenant_id=1, need_start_service=false, need_stop_service=false, need_change_leader=false, is_in_service()=false, can_update_version=false, cur_leader_epoch=0, start_service_tstamp_=0, error_count_for_change_leader_=0, last_error_tstamp_for_change_leader_=0)
[2024-09-13 13:02:40.564262] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=12][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:40.564270] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:40.564277] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760564248)
[2024-09-13 13:02:40.567521] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.567839] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.567858] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.567865] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.567886] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.567897] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760567897, replica_locations:[]})
[2024-09-13 13:02:40.567911] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.567932] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.567942] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.567961] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.568007] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568445123, cache_obj->added_lc()=false, cache_obj->get_object_id()=968, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.569021] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.569273] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.569290] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.569296] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=5] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.569303] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.569311] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760569310, replica_locations:[]})
[2024-09-13 13:02:40.569357] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=22000, remain_us=1710368, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=22, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.591580] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.591905] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.591925] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.591931] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.591938] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.591952] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760591951, replica_locations:[]})
[2024-09-13 13:02:40.591966] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.591987] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.591996] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.592033] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.592077] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568469194, cache_obj->added_lc()=false, cache_obj->get_object_id()=969, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.593210] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=18][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.593300] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.593317] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.593323] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.593333] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.593344] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760593344, replica_locations:[]})
[2024-09-13 13:02:40.593391] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=23000, remain_us=1686335, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=23, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.616632] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.617184] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.617204] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.617210] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.617221] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.617235] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760617235, replica_locations:[]})
[2024-09-13 13:02:40.617250] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.617273] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.617282] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.617305] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.617350] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568494466, cache_obj->added_lc()=false, cache_obj->get_object_id()=970, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.618327] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.618535] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.618552] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.618561] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.618571] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.618583] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760618582, replica_locations:[]})
[2024-09-13 13:02:40.618633] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=24000, remain_us=1661092, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=24, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.630420] INFO [COMMON] print_sender_status (ob_io_struct.cpp:871) [19887][IO_TUNING0][T0][Y0-0000000000000000-0-0] [lt=45] [IO STATUS SENDER](*this=send_index: 1, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 2, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 3, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 4, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 5, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 6, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 7, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 8, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 9, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 10, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 11, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 12, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 13, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 14, 
req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 15, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; send_index: 16, req_count: 0, reservation_ts: 9223372036854775807, group_limitation_ts: 9223372036854775807, tenant_limitation_ts: 9223372036854775807, proportion_ts: 9223372036854775807; ) [2024-09-13 13:02:40.642867] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.643197] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.643213] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.643220] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.643230] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.643245] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760643244, replica_locations:[]}) [2024-09-13 13:02:40.643259] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.643280] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.643289] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.643330] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.643376] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568520493, cache_obj->added_lc()=false, cache_obj->get_object_id()=971, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:40.644502] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.644696] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.644714] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=18][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.644721] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.644731] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.644740] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760644740, replica_locations:[]}) [2024-09-13 13:02:40.644791] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=25000, remain_us=1634934, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=25, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.664252] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=8][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.664270] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=17][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.664277] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760664233) [2024-09-13 13:02:40.670044] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.670329] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.670352] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.670359] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", 
ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.670374] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.670389] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760670388, replica_locations:[]}) [2024-09-13 13:02:40.670405] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.670445] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.670454] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.670474] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.670522] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] set logical del 
time(cache_obj->get_logical_del_time()=6568547639, cache_obj->added_lc()=false, cache_obj->get_object_id()=972, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.671573] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.671798] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.671818] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.671824] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.671833] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.671843] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", 
old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760671843, replica_locations:[]}) [2024-09-13 13:02:40.671907] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=26000, remain_us=1607819, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=26, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.698185] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=23][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.698503] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.698526] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.698533] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.698542] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.698558] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760698557, replica_locations:[]}) [2024-09-13 13:02:40.698573] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.698599] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.698609] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.698630] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.698678] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568575794, cache_obj->added_lc()=false, cache_obj->get_object_id()=973, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 
13:02:40.699858] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.700035] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.700053] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.700060] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.700071] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.700083] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760700083, replica_locations:[]}) [2024-09-13 13:02:40.700135] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=27000, remain_us=1579590, 
base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=27, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.704620] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=13] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=8) [2024-09-13 13:02:40.727372] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=27][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.727625] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.727648] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.727656] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.727664] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.727676] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=5] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, 
renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760727675, replica_locations:[]}) [2024-09-13 13:02:40.727691] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.727715] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.727724] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=7][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.727759] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.727808] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568604923, cache_obj->added_lc()=false, cache_obj->get_object_id()=974, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.728865] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 
13:02:40.729006] INFO [SERVER] prepare_alloc_queue (ob_dl_queue.cpp:52) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=16] Construct Queue Num(construct_num=0, get_push_idx()=8, get_cur_idx()=0, get_pop_idx()=0) [2024-09-13 13:02:40.729068] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.729086] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.729092] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.729099] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.729109] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760729108, replica_locations:[]}) [2024-09-13 13:02:40.729159] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1] will sleep(sleep_us=28000, 
remain_us=1550567, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=28, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.729161] INFO [SERVER] runTimerTask (ob_eliminate_task.cpp:225) [20246][T1_ReqMemEvict][T1][Y0-0000000000000000-0-0] [lt=15] sql audit evict task end(request_manager_->get_tenant_id()=1, evict_high_mem_level=75665245, evict_high_size_level=471859, evict_batch_count=0, elapse_time=0, size_used=0, mem_used=16637952) [2024-09-13 13:02:40.730987] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=26] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1]) [2024-09-13 13:02:40.757383] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.757685] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.757706] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.757713] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=6] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.757724] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.757739] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760757738, replica_locations:[]}) [2024-09-13 13:02:40.757754] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.757779] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.757789] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.757810] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.757858] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568634975, cache_obj->added_lc()=false, cache_obj->get_object_id()=975, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 
0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.758851] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=26][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.759058] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.759081] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=23][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.759092] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.759105] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.759117] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, 
renew_time:1726203760759116, replica_locations:[]}) [2024-09-13 13:02:40.759171] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=29000, remain_us=1520555, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=29, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.764298] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AF0-0-0] [lt=21][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760763821) [2024-09-13 13:02:40.764306] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:40.764327] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=20][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760764299) [2024-09-13 13:02:40.764337] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203760564232, 
cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:40.764325] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AF0-0-0] [lt=26][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203760763821}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:40.764360] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.764366] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.764371] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760764346) [2024-09-13 13:02:40.764385] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) 
[20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.764389] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.764392] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=3][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760764379) [2024-09-13 13:02:40.788459] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.788831] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=14][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.788908] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=75][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.788929] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.788945] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] server_list is empty, do 
nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.788967] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760788966, replica_locations:[]}) [2024-09-13 13:02:40.788987] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=18] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.789021] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.789034] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.789078] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.789137] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568666253, cache_obj->added_lc()=false, cache_obj->get_object_id()=976, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 
0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.790149] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=25][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.790392] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.790412] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.790423] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.790454] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=29] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.790469] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760790468, replica_locations:[]}) [2024-09-13 13:02:40.790523] INFO [SERVER] 
sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=30000, remain_us=1489203, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=30, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.820785] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=118][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.821077] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.821106] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.821118] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.821131] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.821148] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, 
ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760821147, replica_locations:[]}) [2024-09-13 13:02:40.821166] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.821193] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.821204] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.821228] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.821293] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=8][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568698407, cache_obj->added_lc()=false, cache_obj->get_object_id()=977, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.822368] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=28][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.822565] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) 
[19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.822587] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=21][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.822598] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.822610] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.822623] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760822622, replica_locations:[]}) [2024-09-13 13:02:40.822679] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=31000, remain_us=1457047, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=31, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.827465] WDIAG [SERVER] submit_async_refresh_schema_task (ob_service.cpp:534) [20293][T1_L0_G0][T1][YB42AC103326-00062119ED72BEA6-0-0] 
[lt=24][errcode=-4023] fail to async refresh schema(ret=-4023, ret="OB_EAGAIN", tenant_id=1, schema_version=1725265416329232) [2024-09-13 13:02:40.849798] INFO [STORAGE.TRANS] refresh_gts_location_ (ob_gts_source.cpp:584) [20031][TsMgr][T0][Y0-0000000000000000-0-0] [lt=0] gts nonblock renew success(ret=0, tenant_id=1, gts_local_cache={srr:[mts=0], gts:0, latest_srr:[mts=0]}) [2024-09-13 13:02:40.853921] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.854230] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.854253] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.854264] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.854292] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=26] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.854308] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, 
cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760854307, replica_locations:[]}) [2024-09-13 13:02:40.854323] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=14] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.854349] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.854360] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.854393] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.854458] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568731559, cache_obj->added_lc()=false, cache_obj->get_object_id()=978, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.855544] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, 
ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.855704] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.855725] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.855735] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.855747] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.855760] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760855759, replica_locations:[]}) [2024-09-13 13:02:40.855818] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=32000, remain_us=1423907, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=32, v.err_=-4721, timeout_timestamp=1726203762279725) [2024-09-13 13:02:40.864312] WDIAG [STORAGE.TRANS] 
process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AF1-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760863906) [2024-09-13 13:02:40.864339] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AF1-0-0] [lt=24][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203760863906}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}}) [2024-09-13 13:02:40.864390] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226}) [2024-09-13 13:02:40.864406] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=15][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, 
ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760864384) [2024-09-13 13:02:40.864413] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203760764344, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0") [2024-09-13 13:02:40.864436] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.864455] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=19][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1) [2024-09-13 13:02:40.864460] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760864424) [2024-09-13 13:02:40.872582] INFO [STORAGE.TRANS] handle_request (ob_timestamp_access.cpp:32) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B60-0-0] [lt=13] ObTimestampAccess service type is FOLLOWER(ret=-4038, service_type=0) [2024-09-13 13:02:40.872601] WDIAG [STORAGE.TRANS] process (ob_gts_rpc.cpp:90) [20302][T1_L0_G5][T1][YB42AC103326-00062119EC8D7B60-0-0] [lt=18][errcode=-4038] handle request failed(ret=-4038, ret="OB_NOT_MASTER", arg_={tenant_id:1, srr:[mts=1726203760872242], range_size:1, sender:"172.16.51.38:2882"}) [2024-09-13 
13:02:40.872967] WDIAG [RPC.FRAME] check_cluster_id (ob_rpc_processor_base.cpp:135) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DF0-0-0] [lt=14][errcode=-8004] packet dst_cluster_id not match(ret=-8004, self.dst_cluster_id=1726203323, pkt.dst_cluster_id=1724378954, pkt={hdr_:{checksum_:1369453673, pcode_:330, hlen_:184, priority_:1, flags_:6151, tenant_id_:1, priv_tenant_id_:0, session_id_:0, trace_id_:12380982489894, timeout_:2000000, timestamp:1726203760872639, dst_cluster_id:1724378954, cost_time:{len:40, arrival_push_diff:0, push_pop_diff:0, pop_process_start_diff:0, process_start_end_diff:0, process_end_response_diff:0, packet_id:62035980, request_arrival_time:0}, compressor_type_:0, original_len_:0, src_cluster_id_:1724378954, seq_no_:1726203760872202}, chid_:0, clen_:35, assemble:false, msg_count:0, payload:0}) [2024-09-13 13:02:40.872998] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DF0-0-0] [lt=30][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:40.873132] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20033][RpcIO][T0][Y0-0000000000000000-0-0] [lt=17] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.873584] WDIAG [RPC.FRAME] run (ob_rpc_processor_base.cpp:79) [20300][T1_L0_G9][T1][YB42AC103326-00062119D7143DF0-0-0] [lt=5][errcode=-8004] checking cluster ID failed(ret=-8004) [2024-09-13 13:02:40.873813] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20034][RpcIO][T0][Y0-0000000000000000-0-0] [lt=19] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.873896] INFO [RPC.FRAME] rpc_easy_timer_cb (ob_net_easy.cpp:603) [20032][RpcIO][T0][Y0-0000000000000000-0-0] [lt=17] [RPC EASY STAT](log_str=conn count=0/0, request done=0/0, request doing=0/0) [2024-09-13 13:02:40.888040] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) 
[20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=12][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.888342] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.888371] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=28][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.888387] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]}) [2024-09-13 13:02:40.888405] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[]) [2024-09-13 13:02:40.888480] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=14] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760888479, replica_locations:[]}) [2024-09-13 13:02:40.888518] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=35] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, 
renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721) [2024-09-13 13:02:40.888552] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4006] exec result is null(ret=-4006) [2024-09-13 13:02:40.888567] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006) [2024-09-13 13:02:40.888597] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS") [2024-09-13 13:02:40.888654] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568765769, cache_obj->added_lc()=false, cache_obj->get_object_id()=979, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead") [2024-09-13 13:02:40.889924] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=16][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST") [2024-09-13 13:02:40.890121] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0) [2024-09-13 13:02:40.890144] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] 
[lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.890155] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.890167] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.890180] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760890179, replica_locations:[]})
[2024-09-13 13:02:40.890232] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1] will sleep(sleep_us=33000, remain_us=1389493, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=33, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.904711] INFO [COMMON] replace_map (ob_kv_storecache.cpp:746) [19911][KVCacheRep][T0][Y0-0000000000000000-0-0] [lt=19] replace map num details(ret=0, replace_node_count=0, map_once_replace_num_=62914, map_replace_skip_count_=7)
[2024-09-13 13:02:40.923490] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=17][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.923822] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.923852] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=30][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.923869] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=16] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.923895] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=24] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.923912] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760923911, replica_locations:[]})
[2024-09-13 13:02:40.923928] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.923955] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.923967] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.924016] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.924068] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568801185, cache_obj->added_lc()=false, cache_obj->get_object_id()=980, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.925090] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.925311] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.925331] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.925347] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.925358] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.925371] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760925370, replica_locations:[]})
[2024-09-13 13:02:40.925423] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1] will sleep(sleep_us=34000, remain_us=1354302, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=34, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.931302] INFO [COMMON] compute_tenant_wash_size (ob_kvcache_store.cpp:1155) [19910][KVCacheWash][T0][Y0-0000000000000000-0-0] [lt=24] Wash compute wash size(is_wash_valid=false, sys_total_wash_size=-14016055706, global_cache_size=0, tenant_max_wash_size=0, tenant_min_wash_size=0, tenant_ids_=[500, 508, 1])
[2024-09-13 13:02:40.959693] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=19][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.960032] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.960057] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=24][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.960069] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.960082] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=11] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.960098] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760960097, replica_locations:[]})
[2024-09-13 13:02:40.960115] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=15] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.960141] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.960152] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.960183] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.960233] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568837348, cache_obj->added_lc()=false, cache_obj->get_object_id()=981, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.961261] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=31][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.961478] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.961499] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.961510] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.961522] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.961534] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760961533, replica_locations:[]})
[2024-09-13 13:02:40.961589] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=35000, remain_us=1318137, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=35, v.err_=-4721, timeout_timestamp=1726203762279725)
[2024-09-13 13:02:40.964459] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_tenant_weak_read_service.cpp:474) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AF2-0-0] [lt=23][errcode=-4341] process cluster heartbeat rpc: self is not in cluster service(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id_=1, svr="172.16.51.38:2882", version={val:1726203695813668000, v:0}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760963996)
[2024-09-13 13:02:40.964477] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc_cb (ob_tenant_weak_read_service.cpp:452) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=5][errcode=-4076] tenant weak read service cluster heartbeat RPC fail(ret=-4076, rcode={code:-4076, msg:"post cluster heartbeat rpc failed, tenant_id=1", warnings:[]}, tenant_id_=1, dst="172.16.51.35:2882", cluster_service_tablet_id={id:226})
[2024-09-13 13:02:40.964495] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:869) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=16][errcode=-4076] post cluster heartbeat rpc fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, local_server_version={val:18446744073709551615, v:3}, valid_part_count=0, total_part_count=0, generate_timestamp=1726203760964471)
[2024-09-13 13:02:40.964486] WDIAG [STORAGE.TRANS] process_cluster_heartbeat_rpc (ob_weak_read_service.cpp:233) [20278][T1_L0_G0][T1][YB42AC103326-00062119ED1D6AF2-0-0] [lt=25][errcode=-4341] tenant weak read service process cluster heartbeat RPC fail(ret=-4341, ret="OB_NOT_IN_SERVICE", tenant_id=1, req={req_server:"172.16.51.38:2882", version:{val:1726203695813668000, v:0}, valid_part_count:0, total_part_count:0, generate_timestamp:1726203760963996}, twrs={inited:true, tenant_id:1, self:"172.16.51.35:2882", svr_version_mgr:{server_version:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}, server_version_for_stat:{version:{val:18446744073709551615, v:3}, total_part_count:0, valid_inner_part_count:0, valid_user_part_count:0, epoch_tstamp:0}}, cluster_service:{current_version:{val:0, v:0}, min_version:{val:0, v:0}, max_version:{val:0, v:0}}})
[2024-09-13 13:02:40.964503] WDIAG [STORAGE.TRANS] do_cluster_heartbeat_ (ob_tenant_weak_read_service.cpp:879) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=9][errcode=-4076] tenant weak read service do cluster heartbeat fail(ret=-4076, ret="OB_NEED_WAIT", tenant_id_=1, last_post_cluster_heartbeat_tstamp_=1726203760864422, cluster_heartbeat_interval_=1000000, cluster_service_tablet_id={id:226}, cluster_service_master="0.0.0.0:0")
[2024-09-13 13:02:40.964525] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=7][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:40.964537] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=11][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:40.964542] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760964512)
[2024-09-13 13:02:40.964561] WDIAG [STORAGE.TRANS] generate_min_weak_read_version (ob_weak_read_util.cpp:83) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=13][errcode=-4023] get gts cache error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:40.964568] WDIAG [STORAGE.TRANS] generate_server_version (ob_tenant_weak_read_service.cpp:317) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=6][errcode=-4023] generate min weak read version error(ret=-4023, tenant_id=1)
[2024-09-13 13:02:40.964571] WDIAG [STORAGE.TRANS] generate_tenant_weak_read_timestamp_ (ob_tenant_weak_read_service.cpp:597) [20247][T1_TenantWeakRe][T1][Y0-0000000000000000-0-0] [lt=4][errcode=-4023] generate server version for tenant fail(ret=-4023, ret="OB_EAGAIN", tenant_id=1, index=0x2b07960dbf50, server_version_epoch_tstamp_=1726203760964557)
[2024-09-13 13:02:40.996832] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=15][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.997197] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=12][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.997235] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=37][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.997260] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=24] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.997279] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=17] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.997300] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=13] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760997299, replica_locations:[]})
[2024-09-13 13:02:40.997322] INFO [SHARE.LOCATION] batch_renew_tablet_locations (ob_location_service.cpp:442) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=20] [TABLET_LOCATION] batch renew tablet locations finished(ret=0, ret="OB_SUCCESS", tenant_id=1, renew_type=0, is_nonblock=false, tablet_list=[{id:1}], ls_ids=[{id:1}], error_code=-4721)
[2024-09-13 13:02:40.997348] WDIAG [SQL] do_close_plan (ob_result_set.cpp:825) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=-4006] exec result is null(ret=-4006)
[2024-09-13 13:02:40.997359] WDIAG [SQL] do_close (ob_result_set.cpp:922) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] fail close main query(ret=0, do_close_plan_ret=-4006)
[2024-09-13 13:02:40.997389] WDIAG [SQL] move_to_sqlstat_cache (ob_sql_stat_record.cpp:352) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0][errcode=0] the key is not valid which at plan cache mgr(ret=0, ret="OB_SUCCESS")
[2024-09-13 13:02:40.997446] WDIAG [SQL.PC] common_free (ob_lib_cache_object_manager.cpp:141) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=9][errcode=0] set logical del time(cache_obj->get_logical_del_time()=6568874554, cache_obj->added_lc()=false, cache_obj->get_object_id()=982, cache_obj->get_tenant_id()=1, lbt()="0x24edc06b 0xbd5c4fe 0x24dd7500 0x24f2d5e3 0x24f31630 0x5bdd4a9 0x24ed6bf8 0x24ed6972 0x5ccd5d8 0x24edda27 0x251869b8 0x25187b92 0x12a45933 0x13373523 0x2574e186 0x5d8db56 0x1105a477 0x25104944 0x2512094d 0x1428ea01 0x1428a35f 0x2b0795d89dd5 0x2b079609bead")
[2024-09-13 13:02:40.998565] WDIAG [SERVER] fill_ls_replica (ob_service.cpp:2761) [20300][T1_L0_G9][T1][YB42AC103323-000621F922060C80-0-0] [lt=21][errcode=-4719] get ls handle failed(ret=-4719, ret="OB_LS_NOT_EXIST")
[2024-09-13 13:02:40.998801] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=1][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.998824] WDIAG [SHARE.PT] find_leader (ob_ls_info.cpp:847) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=22][errcode=-4018] fail to get leader replica(ret=-4018, ret="OB_ENTRY_NOT_EXIST", *this={tenant_id:1, ls_id:{id:1}, replicas:[]}, replica count=0)
[2024-09-13 13:02:40.998835] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:140) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] leader doesn't exist, try use all_server_list(tmp_ret=-4018, tmp_ret="OB_ENTRY_NOT_EXIST", ls_info={tenant_id:1, ls_id:{id:1}, replicas:[]})
[2024-09-13 13:02:40.998846] INFO [SHARE.PT] get_ls_info_ (ob_rpc_ls_table.cpp:151) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] server_list is empty, do nothing(ret=0, ret="OB_SUCCESS", server_list=[])
[2024-09-13 13:02:40.998860] INFO [SHARE.LOCATION] batch_update_caches_ (ob_ls_location_service.cpp:944) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=10] [LS_LOCATION]ls location cache has changed(ret=0, ret="OB_SUCCESS", old_location={cache_key:{tenant_id:0, ls_id:{id:-1}, cluster_id:-1}, renew_time:0, replica_locations:[]}, new_location={cache_key:{tenant_id:1, ls_id:{id:1}, cluster_id:1726203323}, renew_time:1726203760998859, replica_locations:[]})
[2024-09-13 13:02:40.998929] INFO [SERVER] sleep_before_local_retry (ob_query_retry_ctrl.cpp:92) [19945][SerScheQueue1][T1][YB42AC103323-000621F922060C80-0-0] [lt=0] will sleep(sleep_us=36000, remain_us=1280797, base_sleep_us=1000, retry_sleep_type=1, v.stmt_retry_times_=36, v.err_=-4721, timeout_timestamp=1726203762279725)